00:00:00.000 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 1057
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3719
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.119 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.120 The recommended git tool is: git
00:00:00.120 using credential 00000000-0000-0000-0000-000000000002
00:00:00.126 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.159 Fetching changes from the remote Git repository
00:00:00.161 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.197 Using shallow fetch with depth 1
00:00:00.197 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.197 > git --version # timeout=10
00:00:00.220 > git --version # 'git version 2.39.2'
00:00:00.220 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.236 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.236 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.241 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.253 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.265 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.265 > git config core.sparsecheckout # timeout=10
00:00:07.276 > git read-tree -mu HEAD # timeout=10
00:00:07.292 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.314 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.315 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.399 [Pipeline] Start of Pipeline
00:00:07.411 [Pipeline] library
00:00:07.413 Loading library shm_lib@master
00:00:07.413 Library shm_lib@master is cached. Copying from home.
00:00:07.427 [Pipeline] node
00:00:07.438 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.440 [Pipeline] {
00:00:07.449 [Pipeline] catchError
00:00:07.450 [Pipeline] {
00:00:07.462 [Pipeline] wrap
00:00:07.471 [Pipeline] {
00:00:07.478 [Pipeline] stage
00:00:07.480 [Pipeline] { (Prologue)
00:00:07.700 [Pipeline] sh
00:00:07.984 + logger -p user.info -t JENKINS-CI
00:00:08.002 [Pipeline] echo
00:00:08.004 Node: WFP4
00:00:08.012 [Pipeline] sh
00:00:08.411 [Pipeline] setCustomBuildProperty
00:00:08.420 [Pipeline] echo
00:00:08.422 Cleanup processes
00:00:08.425 [Pipeline] sh
00:00:08.744 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.744 673807 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:08.754 [Pipeline] sh
00:00:09.032 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.032 ++ grep -v 'sudo pgrep'
00:00:09.032 ++ awk '{print $1}'
00:00:09.032 + sudo kill -9
00:00:09.032 + true
00:00:09.042 [Pipeline] cleanWs
00:00:09.048 [WS-CLEANUP] Deleting project workspace...
00:00:09.048 [WS-CLEANUP] Deferred wipeout is used...
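A runnable sketch of the process-cleanup pattern traced above: `pgrep -af` lists every command line matching a pattern, `grep -v` drops the pgrep pipeline itself, `awk` keeps the PID column, and a trailing `|| true` (the log's `+ true`) keeps an empty match from failing the build. A throwaway `sleep 12345` stands in for leftover spdk processes; the workspace path and `sudo` from the real job are omitted.

```shell
# Start a stand-in "leftover" process to clean up.
sleep 12345 &
bg_pid=$!
# List matching command lines, drop the pgrep invocation, keep the PID column.
# Our own shell's PID is excluded in case the pattern appears in its cmdline.
pids=$(pgrep -af "sleep 12345" | grep -v pgrep | awk -v self="$$" '$1 != self {print $1}')
# Force-kill whatever matched; "|| true" means no match must not fail the build.
kill -9 $pids 2>/dev/null || true
wait "$bg_pid" 2>/dev/null || true
echo "workspace clean"
```

The deliberate `|| true` mirrors the Jenkins step: on a freshly cleaned node the pgrep pipeline yields nothing and `kill -9` with no arguments errors out, which must not abort the Prologue stage.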
00:00:09.054 [WS-CLEANUP] done
00:00:09.058 [Pipeline] setCustomBuildProperty
00:00:09.075 [Pipeline] sh
00:00:09.356 + sudo git config --global --replace-all safe.directory '*'
00:00:09.438 [Pipeline] httpRequest
00:00:10.070 [Pipeline] echo
00:00:10.072 Sorcerer 10.211.164.20 is alive
00:00:10.083 [Pipeline] retry
00:00:10.086 [Pipeline] {
00:00:10.099 [Pipeline] httpRequest
00:00:10.104 HttpMethod: GET
00:00:10.104 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.105 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.121 Response Code: HTTP/1.1 200 OK
00:00:10.121 Success: Status code 200 is in the accepted range: 200,404
00:00:10.122 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:14.981 [Pipeline] }
00:00:14.998 [Pipeline] // retry
00:00:15.006 [Pipeline] sh
00:00:15.289 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:15.304 [Pipeline] httpRequest
00:00:15.671 [Pipeline] echo
00:00:15.673 Sorcerer 10.211.164.20 is alive
00:00:15.682 [Pipeline] retry
00:00:15.684 [Pipeline] {
00:00:15.698 [Pipeline] httpRequest
00:00:15.702 HttpMethod: GET
00:00:15.703 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:15.703 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:15.711 Response Code: HTTP/1.1 200 OK
00:00:15.711 Success: Status code 200 is in the accepted range: 200,404
00:00:15.711 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:20.487 [Pipeline] }
00:01:20.504 [Pipeline] // retry
00:01:20.511 [Pipeline] sh
00:01:20.795 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:23.342 [Pipeline] sh
00:01:23.629 + git -C spdk log --oneline -n5
00:01:23.629 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:23.629 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:23.629 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:23.629 66289a6db build: use VERSION file for storing version
00:01:23.629 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:23.653 [Pipeline] withCredentials
00:01:23.660 > git --version # timeout=10
00:01:23.671 > git --version # 'git version 2.39.2'
00:01:23.684 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:01:23.686 [Pipeline] {
00:01:23.693 [Pipeline] retry
00:01:23.695 [Pipeline] {
00:01:23.710 [Pipeline] sh
00:01:23.990 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:24.261 [Pipeline] }
00:01:24.278 [Pipeline] // retry
00:01:24.283 [Pipeline] }
00:01:24.300 [Pipeline] // withCredentials
00:01:24.309 [Pipeline] httpRequest
00:01:24.760 [Pipeline] echo
00:01:24.762 Sorcerer 10.211.164.20 is alive
00:01:24.771 [Pipeline] retry
00:01:24.774 [Pipeline] {
00:01:24.788 [Pipeline] httpRequest
00:01:24.792 HttpMethod: GET
00:01:24.793 URL: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:24.793 Sending request to url: http://10.211.164.20/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:24.807 Response Code: HTTP/1.1 200 OK
00:01:24.807 Success: Status code 200 is in the accepted range: 200,404
00:01:24.808 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:54.017 [Pipeline] }
00:01:54.035 [Pipeline] // retry
00:01:54.042 [Pipeline] sh
00:01:54.326 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:55.715 [Pipeline] sh
00:01:55.998 + git -C dpdk log --oneline -n5
00:01:55.998 eeb0605f11 version: 23.11.0
00:01:55.998 238778122a doc: update release notes for 23.11
00:01:55.998 46aa6b3cfc doc: fix description of RSS features
00:01:55.998 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:55.998 7e421ae345 devtools: support skipping forbid rule check
00:01:56.007 [Pipeline] }
00:01:56.021 [Pipeline] // stage
00:01:56.029 [Pipeline] stage
00:01:56.031 [Pipeline] { (Prepare)
00:01:56.048 [Pipeline] writeFile
00:01:56.062 [Pipeline] sh
00:01:56.345 + logger -p user.info -t JENKINS-CI
00:01:56.357 [Pipeline] sh
00:01:56.639 + logger -p user.info -t JENKINS-CI
00:01:56.650 [Pipeline] sh
00:01:56.932 + cat autorun-spdk.conf
00:01:56.932 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:56.932 SPDK_TEST_NVMF=1
00:01:56.932 SPDK_TEST_NVME_CLI=1
00:01:56.932 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:56.932 SPDK_TEST_NVMF_NICS=e810
00:01:56.932 SPDK_TEST_VFIOUSER=1
00:01:56.932 SPDK_RUN_UBSAN=1
00:01:56.933 NET_TYPE=phy
00:01:56.933 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:56.933 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:56.940 RUN_NIGHTLY=1
00:01:56.944 [Pipeline] readFile
00:01:56.965 [Pipeline] withEnv
00:01:56.966 [Pipeline] {
00:01:56.978 [Pipeline] sh
00:01:57.261 + set -ex
00:01:57.261 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:57.261 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:57.261 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:57.261 ++ SPDK_TEST_NVMF=1
00:01:57.261 ++ SPDK_TEST_NVME_CLI=1
00:01:57.261 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:57.261 ++ SPDK_TEST_NVMF_NICS=e810
00:01:57.261 ++ SPDK_TEST_VFIOUSER=1
00:01:57.261 ++ SPDK_RUN_UBSAN=1
00:01:57.261 ++ NET_TYPE=phy
00:01:57.261 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:57.261 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:57.261 ++ RUN_NIGHTLY=1
00:01:57.261 + case $SPDK_TEST_NVMF_NICS in
00:01:57.261 + DRIVERS=ice
00:01:57.261 + [[ tcp == \r\d\m\a ]]
00:01:57.261 + [[ -n ice ]]
00:01:57.261 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:57.261 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:57.261 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:57.261 rmmod: ERROR: Module i40iw is not currently loaded
00:01:57.261 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:57.261 + true
00:01:57.261 + for D in $DRIVERS
00:01:57.261 + sudo modprobe ice
00:01:57.261 + exit 0
00:01:57.269 [Pipeline] }
00:01:57.283 [Pipeline] // withEnv
00:01:57.287 [Pipeline] }
00:01:57.301 [Pipeline] // stage
00:01:57.309 [Pipeline] catchError
00:01:57.311 [Pipeline] {
00:01:57.324 [Pipeline] timeout
00:01:57.324 Timeout set to expire in 1 hr 0 min
00:01:57.325 [Pipeline] {
00:01:57.338 [Pipeline] stage
00:01:57.340 [Pipeline] { (Tests)
00:01:57.353 [Pipeline] sh
00:01:57.637 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:57.637 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:57.637 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:57.638 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:57.638 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:57.638 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:57.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:57.638 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:57.638 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:57.638 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:57.638 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:57.638 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:57.638 + source /etc/os-release
00:01:57.638 ++ NAME='Fedora Linux'
00:01:57.638 ++ VERSION='39 (Cloud Edition)'
00:01:57.638 ++ ID=fedora
00:01:57.638 ++ VERSION_ID=39
00:01:57.638 ++ VERSION_CODENAME=
00:01:57.638 ++ PLATFORM_ID=platform:f39
00:01:57.638 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:57.638 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:57.638 ++ LOGO=fedora-logo-icon
00:01:57.638 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:57.638 ++ HOME_URL=https://fedoraproject.org/
00:01:57.638 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:57.638 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:57.638 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:57.638 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:57.638 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:57.638 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:57.638 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:57.638 ++ SUPPORT_END=2024-11-12
00:01:57.638 ++ VARIANT='Cloud Edition'
00:01:57.638 ++ VARIANT_ID=cloud
00:01:57.638 + uname -a
00:01:57.638 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:01:57.638 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:00.172 Hugepages
00:02:00.172 node hugesize free / total
00:02:00.172 node0 1048576kB 0 / 0
00:02:00.172 node0 2048kB 0 / 0
00:02:00.172 node1 1048576kB 0 / 0
00:02:00.172 node1 2048kB 0 / 0
00:02:00.172
00:02:00.172 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:00.172 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:00.172 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
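Both config reads above (`source autorun-spdk.conf`, `source /etc/os-release`) rely on the same mechanism: the files are plain `KEY=value` shell, so sourcing them exposes the keys as ordinary variables. A runnable sketch, using a temp copy with the values from this log so the example does not depend on the host's real distro:

```shell
# Write a minimal os-release-style file with the values seen in the log.
os_release=$(mktemp)
cat > "$os_release" <<'EOF'
NAME='Fedora Linux'
ID=fedora
VERSION_ID=39
PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
EOF
# Sourcing it sets NAME, ID, VERSION_ID, PRETTY_NAME in the current shell.
source "$os_release"
echo "distro: $ID $VERSION_ID ($PRETTY_NAME)"
rm -f "$os_release"
```

This is why the xtrace shows each assignment with a `++` prefix: the file's lines execute as shell statements inside the sourcing script.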
00:02:00.172 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:00.172 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:00.172 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:00.172 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:00.172 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:00.172 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:00.172 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:00.172 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:00.172 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:00.172 + rm -f /tmp/spdk-ld-path 00:02:00.172 + source autorun-spdk.conf 00:02:00.172 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.172 ++ SPDK_TEST_NVMF=1 00:02:00.172 ++ SPDK_TEST_NVME_CLI=1 00:02:00.172 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.172 ++ SPDK_TEST_NVMF_NICS=e810 00:02:00.172 ++ SPDK_TEST_VFIOUSER=1 00:02:00.172 ++ SPDK_RUN_UBSAN=1 00:02:00.172 ++ NET_TYPE=phy 00:02:00.172 ++ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:00.172 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.172 ++ RUN_NIGHTLY=1 00:02:00.172 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:00.172 + [[ -n '' ]] 00:02:00.172 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:00.172 + for M in /var/spdk/build-*-manifest.txt 00:02:00.172 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:00.172 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.172 + for M in /var/spdk/build-*-manifest.txt 00:02:00.172 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:00.172 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.172 + for M in /var/spdk/build-*-manifest.txt 00:02:00.172 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:00.172 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:00.172 ++ uname 00:02:00.172 + [[ Linux == \L\i\n\u\x ]] 00:02:00.172 + sudo dmesg -T 00:02:00.172 + sudo dmesg --clear 00:02:00.431 + dmesg_pid=675289 00:02:00.431 + [[ Fedora Linux == FreeBSD ]] 00:02:00.431 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.431 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:00.431 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:00.431 + [[ -x /usr/src/fio-static/fio ]] 00:02:00.431 + export FIO_BIN=/usr/src/fio-static/fio 00:02:00.431 + FIO_BIN=/usr/src/fio-static/fio 00:02:00.431 + sudo dmesg -Tw 00:02:00.431 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:00.431 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:00.431 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:00.431 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.431 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:00.431 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:00.431 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.431 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:00.431 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.431 06:07:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:00.431 06:07:51 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.431 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:00.431 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:00.431 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 
-- $ SPDK_TEST_NVME_CLI=1 00:02:00.431 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v23.11 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.432 06:07:51 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:00.432 06:07:51 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:00.432 06:07:51 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:00.432 06:07:51 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:00.432 06:07:51 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:00.432 06:07:51 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:00.432 06:07:51 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:00.432 06:07:51 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:00.432 06:07:51 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:00.432 06:07:51 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.432 06:07:51 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.432 06:07:51 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.432 06:07:51 -- paths/export.sh@5 -- $ export PATH 00:02:00.432 06:07:51 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:00.432 06:07:51 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:00.432 06:07:51 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:00.432 06:07:52 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734066472.XXXXXX 00:02:00.432 06:07:52 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734066472.UsuLXn 00:02:00.432 06:07:52 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:00.432 06:07:52 -- common/autobuild_common.sh@499 -- $ '[' -n v23.11 ']' 00:02:00.432 06:07:52 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:00.432 06:07:52 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:00.432 06:07:52 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:00.432 06:07:52 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:00.432 06:07:52 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:00.432 06:07:52 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:00.432 06:07:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:00.432 06:07:52 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:00.432 06:07:52 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:00.432 06:07:52 -- pm/common@17 -- $ local monitor 00:02:00.432 06:07:52 -- pm/common@19 
-- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.432 06:07:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.432 06:07:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.432 06:07:52 -- pm/common@21 -- $ date +%s 00:02:00.432 06:07:52 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:00.432 06:07:52 -- pm/common@21 -- $ date +%s 00:02:00.432 06:07:52 -- pm/common@25 -- $ sleep 1 00:02:00.432 06:07:52 -- pm/common@21 -- $ date +%s 00:02:00.432 06:07:52 -- pm/common@21 -- $ date +%s 00:02:00.432 06:07:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734066472 00:02:00.432 06:07:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734066472 00:02:00.432 06:07:52 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734066472 00:02:00.432 06:07:52 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734066472 00:02:00.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734066472_collect-cpu-load.pm.log 00:02:00.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734066472_collect-vmstat.pm.log 00:02:00.432 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734066472_collect-cpu-temp.pm.log 00:02:00.691 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734066472_collect-bmc-pm.bmc.pm.log 00:02:01.628 06:07:53 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:01.628 06:07:53 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:01.628 06:07:53 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:01.628 06:07:53 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.628 06:07:53 -- spdk/autobuild.sh@16 -- $ date -u 00:02:01.628 Fri Dec 13 05:07:53 AM UTC 2024 00:02:01.628 06:07:53 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:01.628 v25.01-rc1-2-ge01cb43b8 00:02:01.628 06:07:53 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:01.628 06:07:53 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:01.628 06:07:53 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:01.628 06:07:53 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:01.628 06:07:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:01.628 06:07:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.628 ************************************ 00:02:01.628 START TEST ubsan 00:02:01.628 ************************************ 00:02:01.628 06:07:53 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:01.628 using ubsan 00:02:01.628 00:02:01.628 real 0m0.000s 00:02:01.628 user 0m0.000s 00:02:01.628 sys 0m0.000s 00:02:01.628 06:07:53 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:01.628 06:07:53 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:01.628 ************************************ 00:02:01.628 END TEST ubsan 00:02:01.628 ************************************ 00:02:01.628 06:07:53 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']' 00:02:01.628 06:07:53 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:01.628 06:07:53 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:01.628 06:07:53 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:01.628 06:07:53 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:01.628 06:07:53 -- common/autotest_common.sh@10 -- $ set +x 00:02:01.628 ************************************ 00:02:01.628 START TEST build_native_dpdk 00:02:01.628 ************************************ 00:02:01.628 06:07:53 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:01.628 06:07:53 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:01.628 eeb0605f11 version: 23.11.0 00:02:01.628 238778122a doc: update release notes for 23.11 00:02:01.628 46aa6b3cfc doc: fix description of RSS features 00:02:01.628 dd88f51a57 devtools: forbid DPDK API in cnxk base driver 00:02:01.628 7e421ae345 devtools: support skipping forbid rule check 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" 
"mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:01.628 06:07:53 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 23.11.0 21.11.0 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 21.11.0 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.628 06:07:53 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.628 06:07:53 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:01.629 patching file config/rte_config.h 00:02:01.629 Hunk #1 succeeded at 60 (offset 1 line). 
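The `cmp_versions` trace above splits each version string on `IFS=.-:` into an array and compares fields numerically, left to right. As a hedged illustration, the same logic can be sketched as a standalone bash function (a hypothetical re-implementation, not SPDK's actual `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the field-wise version compare traced above;
# not SPDK's actual scripts/common.sh implementation.
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"   # split on '.', '-', ':' as in the trace
    IFS=.-: read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < ${#ver1[@]} || v < ${#ver2[@]}; v++ )); do
        a=${ver1[v]:-0}
        b=${ver2[v]:-0}
        (( a > b )) && return 1      # first differing field decides
        (( a < b )) && return 0
    done
    return 1                         # all fields equal: not strictly less
}

version_lt 23.11.0 21.11.0 && echo "lt" || echo "not lt"   # not lt
version_lt 23.11.0 24.07.0 && echo "lt" || echo "not lt"   # lt
```

This matches the two comparisons in the log: `lt 23.11.0 21.11.0` returns 1 (so the pre-21.11 patch path is skipped), while `lt 23.11.0 24.07.0` returns 0 (so the `rte_pcapng.c` patch is applied).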
00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 23.11.0 24.07.0 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 23.11.0 '<' 24.07.0 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:01.629 patching file lib/pcapng/rte_pcapng.c 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 23.11.0 24.07.0 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 23.11.0 '>=' 24.07.0 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:01.629 06:07:53 build_native_dpdk -- 
scripts/common.sh@338 -- $ local 'op=>=' 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 23 =~ ^[0-9]+$ ]] 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=23 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:01.629 06:07:53 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@191 -- 
$ '[' Linux = FreeBSD ']' 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:01.629 06:07:53 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:06.900 The Meson build system 00:02:06.900 Version: 1.5.0 00:02:06.900 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:06.900 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:06.900 Build type: native build 00:02:06.900 Program cat found: YES (/usr/bin/cat) 00:02:06.900 Project name: DPDK 00:02:06.900 Project version: 23.11.0 00:02:06.900 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:06.900 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:06.900 Host machine cpu family: x86_64 00:02:06.900 Host machine cpu: x86_64 00:02:06.900 Message: ## Building in Developer Mode ## 00:02:06.900 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:06.900 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:06.900 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:06.900 Program python3 found: YES (/usr/bin/python3) 00:02:06.900 Program cat found: YES (/usr/bin/cat) 00:02:06.900 config/meson.build:113: WARNING: The "machine" option is deprecated. 
Please use "cpu_instruction_set" instead. 00:02:06.900 Compiler for C supports arguments -march=native: YES 00:02:06.900 Checking for size of "void *" : 8 00:02:06.900 Checking for size of "void *" : 8 (cached) 00:02:06.900 Library m found: YES 00:02:06.900 Library numa found: YES 00:02:06.900 Has header "numaif.h" : YES 00:02:06.900 Library fdt found: NO 00:02:06.900 Library execinfo found: NO 00:02:06.900 Has header "execinfo.h" : YES 00:02:06.900 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:06.900 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:06.900 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:06.900 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:06.900 Run-time dependency openssl found: YES 3.1.1 00:02:06.900 Run-time dependency libpcap found: YES 1.10.4 00:02:06.900 Has header "pcap.h" with dependency libpcap: YES 00:02:06.900 Compiler for C supports arguments -Wcast-qual: YES 00:02:06.900 Compiler for C supports arguments -Wdeprecated: YES 00:02:06.900 Compiler for C supports arguments -Wformat: YES 00:02:06.900 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:06.900 Compiler for C supports arguments -Wformat-security: NO 00:02:06.900 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:06.900 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:06.900 Compiler for C supports arguments -Wnested-externs: YES 00:02:06.900 Compiler for C supports arguments -Wold-style-definition: YES 00:02:06.900 Compiler for C supports arguments -Wpointer-arith: YES 00:02:06.900 Compiler for C supports arguments -Wsign-compare: YES 00:02:06.900 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:06.900 Compiler for C supports arguments -Wundef: YES 00:02:06.900 Compiler for C supports arguments -Wwrite-strings: YES 00:02:06.900 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:06.900 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:02:06.900 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:06.900 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:06.900 Program objdump found: YES (/usr/bin/objdump) 00:02:06.900 Compiler for C supports arguments -mavx512f: YES 00:02:06.900 Checking if "AVX512 checking" compiles: YES 00:02:06.900 Fetching value of define "__SSE4_2__" : 1 00:02:06.900 Fetching value of define "__AES__" : 1 00:02:06.900 Fetching value of define "__AVX__" : 1 00:02:06.900 Fetching value of define "__AVX2__" : 1 00:02:06.900 Fetching value of define "__AVX512BW__" : 1 00:02:06.900 Fetching value of define "__AVX512CD__" : 1 00:02:06.900 Fetching value of define "__AVX512DQ__" : 1 00:02:06.900 Fetching value of define "__AVX512F__" : 1 00:02:06.900 Fetching value of define "__AVX512VL__" : 1 00:02:06.900 Fetching value of define "__PCLMUL__" : 1 00:02:06.900 Fetching value of define "__RDRND__" : 1 00:02:06.900 Fetching value of define "__RDSEED__" : 1 00:02:06.900 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:06.900 Fetching value of define "__znver1__" : (undefined) 00:02:06.900 Fetching value of define "__znver2__" : (undefined) 00:02:06.900 Fetching value of define "__znver3__" : (undefined) 00:02:06.900 Fetching value of define "__znver4__" : (undefined) 00:02:06.900 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:06.900 Message: lib/log: Defining dependency "log" 00:02:06.900 Message: lib/kvargs: Defining dependency "kvargs" 00:02:06.900 Message: lib/telemetry: Defining dependency "telemetry" 00:02:06.900 Checking for function "getentropy" : NO 00:02:06.900 Message: lib/eal: Defining dependency "eal" 00:02:06.900 Message: lib/ring: Defining dependency "ring" 00:02:06.900 Message: lib/rcu: Defining dependency "rcu" 00:02:06.900 Message: lib/mempool: Defining dependency "mempool" 00:02:06.900 Message: lib/mbuf: Defining dependency "mbuf" 00:02:06.900 Fetching value 
of define "__PCLMUL__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.900 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:06.900 Compiler for C supports arguments -mpclmul: YES 00:02:06.900 Compiler for C supports arguments -maes: YES 00:02:06.900 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:06.900 Compiler for C supports arguments -mavx512bw: YES 00:02:06.900 Compiler for C supports arguments -mavx512dq: YES 00:02:06.900 Compiler for C supports arguments -mavx512vl: YES 00:02:06.900 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:06.900 Compiler for C supports arguments -mavx2: YES 00:02:06.900 Compiler for C supports arguments -mavx: YES 00:02:06.900 Message: lib/net: Defining dependency "net" 00:02:06.900 Message: lib/meter: Defining dependency "meter" 00:02:06.900 Message: lib/ethdev: Defining dependency "ethdev" 00:02:06.900 Message: lib/pci: Defining dependency "pci" 00:02:06.900 Message: lib/cmdline: Defining dependency "cmdline" 00:02:06.900 Message: lib/metrics: Defining dependency "metrics" 00:02:06.900 Message: lib/hash: Defining dependency "hash" 00:02:06.900 Message: lib/timer: Defining dependency "timer" 00:02:06.900 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:06.900 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.900 Message: lib/acl: Defining dependency "acl" 00:02:06.900 Message: lib/bbdev: Defining dependency "bbdev" 00:02:06.900 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:06.900 Run-time dependency libelf found: YES 0.191 00:02:06.900 Message: lib/bpf: Defining dependency "bpf" 
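The `Compiler for C supports arguments ...: YES/NO` lines above come from Meson test-compiling a stub program with each candidate flag. As a rough sketch of that probe (a hypothetical helper, not Meson's actual implementation; assumes `gcc` is on PATH):

```shell
# Hypothetical sketch of a "Compiler for C supports arguments" probe:
# try to compile an empty program with the flag under -Werror, treating
# any diagnostic as failure. Not Meson's actual implementation.
cc_supports() {
    echo 'int main(void) { return 0; }' |
        gcc -Werror "$1" -x c -o /dev/null - 2>/dev/null
}

for flag in -Wall -fbogus-flag-for-demo; do
    if cc_supports "$flag"; then
        echo "Compiler for C supports arguments $flag: YES"
    else
        echo "Compiler for C supports arguments $flag: NO"
    fi
done
```

The probe's answers drive what gets compiled in: for example, the `-mavx512f: YES` result above is why the AVX512 code paths are enabled later in this build.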
00:02:06.900 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:06.900 Message: lib/compressdev: Defining dependency "compressdev" 00:02:06.900 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:06.900 Message: lib/distributor: Defining dependency "distributor" 00:02:06.900 Message: lib/dmadev: Defining dependency "dmadev" 00:02:06.900 Message: lib/efd: Defining dependency "efd" 00:02:06.900 Message: lib/eventdev: Defining dependency "eventdev" 00:02:06.900 Message: lib/dispatcher: Defining dependency "dispatcher" 00:02:06.900 Message: lib/gpudev: Defining dependency "gpudev" 00:02:06.900 Message: lib/gro: Defining dependency "gro" 00:02:06.900 Message: lib/gso: Defining dependency "gso" 00:02:06.900 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:06.900 Message: lib/jobstats: Defining dependency "jobstats" 00:02:06.901 Message: lib/latencystats: Defining dependency "latencystats" 00:02:06.901 Message: lib/lpm: Defining dependency "lpm" 00:02:06.901 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.901 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.901 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:06.901 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:06.901 Message: lib/member: Defining dependency "member" 00:02:06.901 Message: lib/pcapng: Defining dependency "pcapng" 00:02:06.901 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:06.901 Message: lib/power: Defining dependency "power" 00:02:06.901 Message: lib/rawdev: Defining dependency "rawdev" 00:02:06.901 Message: lib/regexdev: Defining dependency "regexdev" 00:02:06.901 Message: lib/mldev: Defining dependency "mldev" 00:02:06.901 Message: lib/rib: Defining dependency "rib" 00:02:06.901 Message: lib/reorder: Defining dependency "reorder" 00:02:06.901 Message: lib/sched: Defining dependency "sched" 00:02:06.901 Message: lib/security: Defining dependency "security" 00:02:06.901 Message: lib/stack: 
Defining dependency "stack" 00:02:06.901 Has header "linux/userfaultfd.h" : YES 00:02:06.901 Has header "linux/vduse.h" : YES 00:02:06.901 Message: lib/vhost: Defining dependency "vhost" 00:02:06.901 Message: lib/ipsec: Defining dependency "ipsec" 00:02:06.901 Message: lib/pdcp: Defining dependency "pdcp" 00:02:06.901 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:06.901 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:06.901 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:06.901 Message: lib/fib: Defining dependency "fib" 00:02:06.901 Message: lib/port: Defining dependency "port" 00:02:06.901 Message: lib/pdump: Defining dependency "pdump" 00:02:06.901 Message: lib/table: Defining dependency "table" 00:02:06.901 Message: lib/pipeline: Defining dependency "pipeline" 00:02:06.901 Message: lib/graph: Defining dependency "graph" 00:02:06.901 Message: lib/node: Defining dependency "node" 00:02:06.901 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:07.847 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:07.847 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:07.847 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:07.847 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:07.847 Compiler for C supports arguments -Wno-unused-value: YES 00:02:07.847 Compiler for C supports arguments -Wno-format: YES 00:02:07.847 Compiler for C supports arguments -Wno-format-security: YES 00:02:07.847 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:07.847 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:07.847 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:07.847 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:07.847 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:07.847 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:07.847 Compiler for C supports arguments 
-mavx512f: YES (cached) 00:02:07.847 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:07.847 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:07.847 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:07.847 Has header "sys/epoll.h" : YES 00:02:07.847 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:07.847 Configuring doxy-api-html.conf using configuration 00:02:07.847 Configuring doxy-api-man.conf using configuration 00:02:07.847 Program mandb found: YES (/usr/bin/mandb) 00:02:07.847 Program sphinx-build found: NO 00:02:07.847 Configuring rte_build_config.h using configuration 00:02:07.847 Message: 00:02:07.847 ================= 00:02:07.847 Applications Enabled 00:02:07.847 ================= 00:02:07.847 00:02:07.847 apps: 00:02:07.847 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, 00:02:07.847 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline, 00:02:07.847 test-pmd, test-regex, test-sad, test-security-perf, 00:02:07.847 00:02:07.847 Message: 00:02:07.847 ================= 00:02:07.847 Libraries Enabled 00:02:07.847 ================= 00:02:07.847 00:02:07.847 libs: 00:02:07.847 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:07.847 net, meter, ethdev, pci, cmdline, metrics, hash, timer, 00:02:07.847 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, 00:02:07.847 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag, 00:02:07.847 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev, 00:02:07.847 mldev, rib, reorder, sched, security, stack, vhost, ipsec, 00:02:07.847 pdcp, fib, port, pdump, table, pipeline, graph, node, 00:02:07.847 00:02:07.847 00:02:07.847 Message: 00:02:07.847 =============== 00:02:07.847 Drivers Enabled 00:02:07.847 =============== 00:02:07.847 00:02:07.847 common: 00:02:07.847 00:02:07.847 bus: 00:02:07.847 pci, vdev, 
00:02:07.847 mempool: 00:02:07.847 ring, 00:02:07.847 dma: 00:02:07.847 00:02:07.847 net: 00:02:07.847 i40e, 00:02:07.847 raw: 00:02:07.847 00:02:07.847 crypto: 00:02:07.847 00:02:07.847 compress: 00:02:07.847 00:02:07.847 regex: 00:02:07.847 00:02:07.847 ml: 00:02:07.847 00:02:07.847 vdpa: 00:02:07.847 00:02:07.847 event: 00:02:07.847 00:02:07.847 baseband: 00:02:07.847 00:02:07.847 gpu: 00:02:07.847 00:02:07.847 00:02:07.847 Message: 00:02:07.847 ================= 00:02:07.847 Content Skipped 00:02:07.847 ================= 00:02:07.847 00:02:07.847 apps: 00:02:07.847 00:02:07.847 libs: 00:02:07.847 00:02:07.847 drivers: 00:02:07.847 common/cpt: not in enabled drivers build config 00:02:07.847 common/dpaax: not in enabled drivers build config 00:02:07.847 common/iavf: not in enabled drivers build config 00:02:07.847 common/idpf: not in enabled drivers build config 00:02:07.847 common/mvep: not in enabled drivers build config 00:02:07.847 common/octeontx: not in enabled drivers build config 00:02:07.847 bus/auxiliary: not in enabled drivers build config 00:02:07.847 bus/cdx: not in enabled drivers build config 00:02:07.847 bus/dpaa: not in enabled drivers build config 00:02:07.847 bus/fslmc: not in enabled drivers build config 00:02:07.847 bus/ifpga: not in enabled drivers build config 00:02:07.847 bus/platform: not in enabled drivers build config 00:02:07.847 bus/vmbus: not in enabled drivers build config 00:02:07.847 common/cnxk: not in enabled drivers build config 00:02:07.847 common/mlx5: not in enabled drivers build config 00:02:07.847 common/nfp: not in enabled drivers build config 00:02:07.847 common/qat: not in enabled drivers build config 00:02:07.847 common/sfc_efx: not in enabled drivers build config 00:02:07.847 mempool/bucket: not in enabled drivers build config 00:02:07.847 mempool/cnxk: not in enabled drivers build config 00:02:07.847 mempool/dpaa: not in enabled drivers build config 00:02:07.847 mempool/dpaa2: not in enabled drivers build config 
00:02:07.847 mempool/octeontx: not in enabled drivers build config 00:02:07.847 mempool/stack: not in enabled drivers build config 00:02:07.847 dma/cnxk: not in enabled drivers build config 00:02:07.847 dma/dpaa: not in enabled drivers build config 00:02:07.847 dma/dpaa2: not in enabled drivers build config 00:02:07.847 dma/hisilicon: not in enabled drivers build config 00:02:07.847 dma/idxd: not in enabled drivers build config 00:02:07.847 dma/ioat: not in enabled drivers build config 00:02:07.847 dma/skeleton: not in enabled drivers build config 00:02:07.847 net/af_packet: not in enabled drivers build config 00:02:07.847 net/af_xdp: not in enabled drivers build config 00:02:07.847 net/ark: not in enabled drivers build config 00:02:07.847 net/atlantic: not in enabled drivers build config 00:02:07.847 net/avp: not in enabled drivers build config 00:02:07.847 net/axgbe: not in enabled drivers build config 00:02:07.847 net/bnx2x: not in enabled drivers build config 00:02:07.847 net/bnxt: not in enabled drivers build config 00:02:07.847 net/bonding: not in enabled drivers build config 00:02:07.847 net/cnxk: not in enabled drivers build config 00:02:07.848 net/cpfl: not in enabled drivers build config 00:02:07.848 net/cxgbe: not in enabled drivers build config 00:02:07.848 net/dpaa: not in enabled drivers build config 00:02:07.848 net/dpaa2: not in enabled drivers build config 00:02:07.848 net/e1000: not in enabled drivers build config 00:02:07.848 net/ena: not in enabled drivers build config 00:02:07.848 net/enetc: not in enabled drivers build config 00:02:07.848 net/enetfec: not in enabled drivers build config 00:02:07.848 net/enic: not in enabled drivers build config 00:02:07.848 net/failsafe: not in enabled drivers build config 00:02:07.848 net/fm10k: not in enabled drivers build config 00:02:07.848 net/gve: not in enabled drivers build config 00:02:07.848 net/hinic: not in enabled drivers build config 00:02:07.848 net/hns3: not in enabled drivers build config 
00:02:07.848 net/iavf: not in enabled drivers build config 00:02:07.848 net/ice: not in enabled drivers build config 00:02:07.848 net/idpf: not in enabled drivers build config 00:02:07.848 net/igc: not in enabled drivers build config 00:02:07.848 net/ionic: not in enabled drivers build config 00:02:07.848 net/ipn3ke: not in enabled drivers build config 00:02:07.848 net/ixgbe: not in enabled drivers build config 00:02:07.848 net/mana: not in enabled drivers build config 00:02:07.848 net/memif: not in enabled drivers build config 00:02:07.848 net/mlx4: not in enabled drivers build config 00:02:07.848 net/mlx5: not in enabled drivers build config 00:02:07.848 net/mvneta: not in enabled drivers build config 00:02:07.848 net/mvpp2: not in enabled drivers build config 00:02:07.848 net/netvsc: not in enabled drivers build config 00:02:07.848 net/nfb: not in enabled drivers build config 00:02:07.848 net/nfp: not in enabled drivers build config 00:02:07.848 net/ngbe: not in enabled drivers build config 00:02:07.848 net/null: not in enabled drivers build config 00:02:07.848 net/octeontx: not in enabled drivers build config 00:02:07.848 net/octeon_ep: not in enabled drivers build config 00:02:07.848 net/pcap: not in enabled drivers build config 00:02:07.848 net/pfe: not in enabled drivers build config 00:02:07.848 net/qede: not in enabled drivers build config 00:02:07.848 net/ring: not in enabled drivers build config 00:02:07.848 net/sfc: not in enabled drivers build config 00:02:07.848 net/softnic: not in enabled drivers build config 00:02:07.848 net/tap: not in enabled drivers build config 00:02:07.848 net/thunderx: not in enabled drivers build config 00:02:07.848 net/txgbe: not in enabled drivers build config 00:02:07.848 net/vdev_netvsc: not in enabled drivers build config 00:02:07.848 net/vhost: not in enabled drivers build config 00:02:07.848 net/virtio: not in enabled drivers build config 00:02:07.848 net/vmxnet3: not in enabled drivers build config 00:02:07.848 
raw/cnxk_bphy: not in enabled drivers build config 00:02:07.848 raw/cnxk_gpio: not in enabled drivers build config 00:02:07.848 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:07.848 raw/ifpga: not in enabled drivers build config 00:02:07.848 raw/ntb: not in enabled drivers build config 00:02:07.848 raw/skeleton: not in enabled drivers build config 00:02:07.848 crypto/armv8: not in enabled drivers build config 00:02:07.848 crypto/bcmfs: not in enabled drivers build config 00:02:07.848 crypto/caam_jr: not in enabled drivers build config 00:02:07.848 crypto/ccp: not in enabled drivers build config 00:02:07.848 crypto/cnxk: not in enabled drivers build config 00:02:07.848 crypto/dpaa_sec: not in enabled drivers build config 00:02:07.848 crypto/dpaa2_sec: not in enabled drivers build config 00:02:07.848 crypto/ipsec_mb: not in enabled drivers build config 00:02:07.848 crypto/mlx5: not in enabled drivers build config 00:02:07.848 crypto/mvsam: not in enabled drivers build config 00:02:07.848 crypto/nitrox: not in enabled drivers build config 00:02:07.848 crypto/null: not in enabled drivers build config 00:02:07.848 crypto/octeontx: not in enabled drivers build config 00:02:07.848 crypto/openssl: not in enabled drivers build config 00:02:07.848 crypto/scheduler: not in enabled drivers build config 00:02:07.848 crypto/uadk: not in enabled drivers build config 00:02:07.848 crypto/virtio: not in enabled drivers build config 00:02:07.848 compress/isal: not in enabled drivers build config 00:02:07.848 compress/mlx5: not in enabled drivers build config 00:02:07.848 compress/octeontx: not in enabled drivers build config 00:02:07.848 compress/zlib: not in enabled drivers build config 00:02:07.848 regex/mlx5: not in enabled drivers build config 00:02:07.848 regex/cn9k: not in enabled drivers build config 00:02:07.848 ml/cnxk: not in enabled drivers build config 00:02:07.848 vdpa/ifc: not in enabled drivers build config 00:02:07.848 vdpa/mlx5: not in enabled drivers 
build config 00:02:07.848 vdpa/nfp: not in enabled drivers build config 00:02:07.848 vdpa/sfc: not in enabled drivers build config 00:02:07.848 event/cnxk: not in enabled drivers build config 00:02:07.848 event/dlb2: not in enabled drivers build config 00:02:07.848 event/dpaa: not in enabled drivers build config 00:02:07.848 event/dpaa2: not in enabled drivers build config 00:02:07.848 event/dsw: not in enabled drivers build config 00:02:07.848 event/opdl: not in enabled drivers build config 00:02:07.848 event/skeleton: not in enabled drivers build config 00:02:07.848 event/sw: not in enabled drivers build config 00:02:07.848 event/octeontx: not in enabled drivers build config 00:02:07.848 baseband/acc: not in enabled drivers build config 00:02:07.848 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:07.848 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:07.848 baseband/la12xx: not in enabled drivers build config 00:02:07.848 baseband/null: not in enabled drivers build config 00:02:07.848 baseband/turbo_sw: not in enabled drivers build config 00:02:07.848 gpu/cuda: not in enabled drivers build config 00:02:07.848 00:02:07.848 00:02:07.848 Build targets in project: 217 00:02:07.848 00:02:07.848 DPDK 23.11.0 00:02:07.848 00:02:07.848 User defined options 00:02:07.848 libdir : lib 00:02:07.848 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:07.848 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:07.848 c_link_args : 00:02:07.848 enable_docs : false 00:02:07.848 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:07.848 enable_kmods : false 00:02:07.848 machine : native 00:02:07.848 tests : false 00:02:07.848 00:02:07.848 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:07.848 WARNING: Running the setup command as `meson [options]` instead of `meson setup 
[options]` is ambiguous and deprecated. 00:02:07.848 06:07:59 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:07.848 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:07.848 [1/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:07.848 [2/707] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:07.848 [3/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:07.848 [4/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:08.110 [5/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:08.110 [6/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:08.110 [7/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:08.110 [8/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:08.110 [9/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:08.110 [10/707] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:08.110 [11/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:08.110 [12/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:08.110 [13/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:08.110 [14/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:08.110 [15/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:08.110 [16/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:08.110 [17/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:08.110 [18/707] Linking static target lib/librte_kvargs.a 00:02:08.110 [19/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:08.110 [20/707] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:08.110 [21/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:08.110 [22/707] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:08.110 [23/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:08.110 [24/707] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:08.110 [25/707] Linking static target lib/librte_log.a 00:02:08.110 [26/707] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:08.110 [27/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:08.110 [28/707] Linking static target lib/librte_pci.a 00:02:08.368 [29/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:08.368 [30/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:08.368 [31/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:08.368 [32/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:08.368 [33/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:08.368 [34/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:08.368 [35/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:08.368 [36/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:08.368 [37/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:08.633 [38/707] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.633 [39/707] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:08.633 [40/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:08.633 [41/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:08.633 [42/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:08.633 [43/707] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:08.633 [44/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:08.633 [45/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:08.633 [46/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:08.633 [47/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:08.633 [48/707] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:08.633 [49/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:08.633 [50/707] Linking static target lib/librte_meter.a 00:02:08.633 [51/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:08.633 [52/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:08.633 [53/707] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:08.633 [54/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:08.633 [55/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:08.633 [56/707] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:08.633 [57/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:08.633 [58/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:08.633 [59/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:08.633 [60/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:08.633 [61/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:08.633 [62/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:08.633 [63/707] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:08.633 [64/707] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:08.633 [65/707] Compiling C 
object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:08.633 [66/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:08.633 [67/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:08.633 [68/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:08.896 [69/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:08.896 [70/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:08.896 [71/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:08.896 [72/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:08.896 [73/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:08.896 [74/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:08.896 [75/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:08.896 [76/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:08.896 [77/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:08.896 [78/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:08.896 [79/707] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:08.896 [80/707] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:08.896 [81/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:08.896 [82/707] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:08.896 [83/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:08.896 [84/707] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:08.896 [85/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:08.896 [86/707] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:08.896 [87/707] Linking static target lib/librte_ring.a 00:02:08.896 [88/707] Linking static target 
lib/librte_cmdline.a 00:02:08.896 [89/707] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:08.896 [90/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:08.896 [91/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:08.896 [92/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:08.896 [93/707] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:08.896 [94/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:08.896 [95/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:08.896 [96/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:08.896 [97/707] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:08.896 [98/707] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:08.896 [99/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:08.896 [100/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:08.896 [101/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:08.896 [102/707] Linking static target lib/librte_net.a 00:02:08.896 [103/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:08.896 [104/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:09.158 [105/707] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.158 [106/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:09.158 [107/707] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:09.158 [108/707] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:09.158 [109/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:09.158 [110/707] Linking static target lib/librte_metrics.a 00:02:09.158 [111/707] Generating lib/log.sym_chk with a 
custom command (wrapped by meson to capture output) 00:02:09.158 [112/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:09.158 [113/707] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:09.158 [114/707] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:09.158 [115/707] Linking target lib/librte_log.so.24.0 00:02:09.158 [116/707] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:09.158 [117/707] Linking static target lib/librte_cfgfile.a 00:02:09.158 [118/707] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:09.158 [119/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:09.158 [120/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:09.158 [121/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:09.158 [122/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:09.158 [123/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:09.158 [124/707] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:09.158 [125/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:09.158 [126/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:09.419 [127/707] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:09.419 [128/707] Linking static target lib/librte_bitratestats.a 00:02:09.419 [129/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:09.419 [130/707] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.419 [131/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:09.419 [132/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:09.419 [133/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:09.419 [134/707] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:09.419 [135/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:09.419 [136/707] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.419 [137/707] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:02:09.419 [138/707] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:09.419 [139/707] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:09.419 [140/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:09.419 [141/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:09.419 [142/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:09.419 [143/707] Linking target lib/librte_kvargs.so.24.0 00:02:09.419 [144/707] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:09.419 [145/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:09.419 [146/707] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:09.419 [147/707] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:09.419 [148/707] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:09.419 [149/707] Linking static target lib/librte_mempool.a 00:02:09.419 [150/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:09.684 [151/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:09.684 [152/707] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:09.684 [153/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:09.684 [154/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:09.684 [155/707] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:09.684 [156/707] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:09.684 [157/707] Linking static target lib/librte_timer.a 00:02:09.684 [158/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:09.684 [159/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:09.684 [160/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:09.684 [161/707] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:09.684 [162/707] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:09.684 [163/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:09.684 [164/707] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:09.684 [165/707] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.684 [166/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:09.684 [167/707] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.684 [168/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:09.684 [169/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:09.684 [170/707] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:09.684 [171/707] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:09.684 [172/707] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:09.684 [173/707] Linking static target lib/librte_bbdev.a 00:02:09.684 [174/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:02:09.684 [175/707] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:09.684 [176/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:09.685 [177/707] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:09.685 [178/707] Linking static target lib/librte_telemetry.a 00:02:09.685 [179/707] Compiling C object 
lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:09.946 [180/707] Linking static target lib/librte_compressdev.a 00:02:09.946 [181/707] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:09.946 [182/707] Linking static target lib/librte_jobstats.a 00:02:09.946 [183/707] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:09.946 [184/707] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.946 [185/707] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:02:09.946 [186/707] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:09.946 [187/707] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:02:09.946 [188/707] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:09.946 [189/707] Linking static target lib/librte_dispatcher.a 00:02:09.946 [190/707] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:09.946 [191/707] Linking static target lib/librte_gpudev.a 00:02:09.946 [192/707] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:09.946 [193/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:09.946 [194/707] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:09.946 [195/707] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:09.946 [196/707] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:09.946 [197/707] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:09.946 [198/707] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:09.946 [199/707] Linking static target lib/librte_distributor.a 00:02:10.213 [200/707] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:10.213 [201/707] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:10.213 [202/707] Compiling C object 
lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:02:10.213 [203/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:10.213 [204/707] Linking static target lib/librte_latencystats.a 00:02:10.213 [205/707] Linking static target lib/librte_mbuf.a 00:02:10.213 [206/707] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:10.213 [207/707] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:10.213 [208/707] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:10.213 [209/707] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:10.213 [210/707] Linking static target lib/librte_rcu.a 00:02:10.213 [211/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:10.213 [212/707] Linking static target lib/librte_gro.a 00:02:10.213 [213/707] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:10.214 [214/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:10.214 [215/707] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:10.214 [216/707] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:10.214 [217/707] Linking static target lib/librte_dmadev.a 00:02:10.214 [218/707] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:10.214 [219/707] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.214 [220/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:02:10.214 [221/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:10.214 [222/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:10.214 [223/707] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:10.214 [224/707] Linking static target lib/librte_eal.a 00:02:10.214 [225/707] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:10.214 [226/707] 
Linking static target lib/librte_ip_frag.a 00:02:10.214 [227/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:10.214 [228/707] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:10.214 [229/707] Linking static target lib/librte_gso.a 00:02:10.214 [230/707] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:10.214 [231/707] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.214 [232/707] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:10.214 [233/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:10.214 [234/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:10.214 [235/707] Linking static target lib/librte_stack.a 00:02:10.214 [236/707] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:10.214 [237/707] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:10.214 [238/707] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:10.214 [239/707] Linking static target lib/librte_regexdev.a 00:02:10.214 [240/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:02:10.478 [241/707] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:10.478 [242/707] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.478 [243/707] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:02:10.478 [244/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:02:10.478 [245/707] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:10.478 [246/707] Linking static target lib/librte_rawdev.a 00:02:10.478 [247/707] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.478 [248/707] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:10.478 [249/707] Compiling 
C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:10.478 [250/707] Linking static target lib/librte_pcapng.a 00:02:10.478 [251/707] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:10.478 [252/707] Linking static target lib/librte_power.a 00:02:10.478 [253/707] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:10.478 [254/707] Linking static target lib/librte_bpf.a 00:02:10.478 [255/707] Linking target lib/librte_telemetry.so.24.0 00:02:10.478 [256/707] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:02:10.478 [257/707] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.478 [258/707] Linking static target lib/librte_mldev.a 00:02:10.738 [259/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:10.738 [260/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:10.738 [261/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:02:10.738 [262/707] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:10.738 [263/707] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [264/707] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [265/707] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [266/707] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [267/707] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [268/707] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:10.738 [269/707] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:10.738 [270/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:10.738 [271/707] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:10.738 
[272/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:02:10.738 [273/707] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [274/707] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [275/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:02:10.738 [276/707] Linking static target lib/librte_security.a 00:02:10.738 [277/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:10.738 [278/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:10.738 [279/707] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:10.738 [280/707] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:10.738 [281/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:02:10.738 [282/707] Linking static target lib/librte_reorder.a 00:02:10.738 [283/707] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [284/707] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [285/707] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:10.738 [286/707] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:02:10.738 [287/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:10.738 [288/707] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:10.738 [289/707] Linking static target lib/librte_lpm.a 00:02:10.738 [290/707] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:10.738 [291/707] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.000 [292/707] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:11.000 [293/707] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:11.000 
[294/707] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:11.000 [295/707] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.000 [296/707] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:02:11.000 [297/707] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:11.000 [298/707] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.000 [299/707] Linking static target lib/librte_efd.a 00:02:11.000 [300/707] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.000 [301/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:11.000 [302/707] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:11.000 [303/707] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:11.262 [304/707] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:11.262 [305/707] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:11.262 [306/707] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:11.262 [307/707] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:11.262 [308/707] Linking static target lib/librte_rib.a 00:02:11.262 [309/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:11.262 [310/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:11.262 [311/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:11.262 [312/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:11.262 [313/707] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.262 [314/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:11.262 [315/707] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.262 [316/707] Compiling C object 
lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:11.262 [317/707] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:11.262 [318/707] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:11.262 [319/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:11.262 [320/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:11.262 [321/707] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:11.526 [322/707] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.526 [323/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:11.526 [324/707] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:11.526 [325/707] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:11.526 [326/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:02:11.526 [327/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:11.526 [328/707] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:11.526 [329/707] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.526 [330/707] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:11.526 [331/707] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:11.527 [332/707] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.527 [333/707] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:11.527 [334/707] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.527 [335/707] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:11.527 [336/707] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:11.527 [337/707] Compiling C object 
lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:11.527 [338/707] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:02:11.527 [339/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:11.527 [340/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:11.527 [341/707] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:11.786 [342/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:11.786 [343/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:11.786 [344/707] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.786 [345/707] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:11.786 [346/707] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:11.786 [347/707] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:11.786 [348/707] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:11.786 [349/707] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:11.786 [350/707] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:11.786 [351/707] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:11.786 [352/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:11.786 [353/707] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:11.786 [354/707] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:11.786 [355/707] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:11.786 [356/707] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:11.786 [357/707] Linking static target lib/librte_fib.a 00:02:11.786 [358/707] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:11.786 [359/707] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:02:12.048 [360/707] Compiling C object 
lib/librte_node.a.p/node_ip4_reassembly.c.o 00:02:12.048 [361/707] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:02:12.048 [362/707] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:12.048 [363/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:12.048 [364/707] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:12.048 [365/707] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.048 [366/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:12.048 [367/707] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:12.048 [368/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:12.048 [369/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:12.048 [370/707] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:02:12.048 [371/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:12.048 [372/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:12.048 [373/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:12.048 [374/707] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:02:12.048 [375/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:12.048 [376/707] Linking static target lib/librte_graph.a 00:02:12.048 [377/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:12.319 [378/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:12.319 [379/707] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:12.319 [380/707] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:12.319 [381/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:12.319 [382/707] Compiling C object 
lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:12.319 [383/707] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:12.319 [384/707] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:02:12.319 [385/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:12.319 [386/707] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:02:12.319 [387/707] Compiling C object app/dpdk-graph.p/graph_ip6_route.c.o 00:02:12.319 [388/707] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:02:12.319 [389/707] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:02:12.319 [390/707] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:12.319 [391/707] Compiling C object app/dpdk-graph.p/graph_utils.c.o 00:02:12.319 [392/707] Linking static target lib/librte_pdump.a 00:02:12.581 [393/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:12.581 [394/707] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:02:12.581 [395/707] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:02:12.581 [396/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:12.581 [397/707] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:12.581 [398/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:12.581 [399/707] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:02:12.581 [400/707] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:12.581 [401/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:12.581 [402/707] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:12.581 [403/707] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:02:12.581 [404/707] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.581 [405/707] Linking static target lib/librte_sched.a 00:02:12.581 [406/707] Compiling C object 
drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:12.581 [407/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:12.581 [408/707] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:02:12.581 [409/707] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:12.581 [410/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:12.581 [411/707] Linking static target lib/librte_cryptodev.a 00:02:12.581 [412/707] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:12.848 [413/707] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:02:12.848 [414/707] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:12.848 [415/707] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:02:12.848 [416/707] Linking static target lib/librte_table.a 00:02:12.848 [417/707] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.848 [418/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:12.848 [419/707] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:02:12.848 [420/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:02:12.848 [421/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:02:12.848 [422/707] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:12.848 [423/707] Linking static target drivers/librte_bus_vdev.a 00:02:12.848 [424/707] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:02:12.848 [425/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:12.848 [426/707] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:12.848 [427/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:12.848 [428/707] Linking static target lib/librte_ipsec.a 00:02:12.848 
[429/707] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.848 [430/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:12.848 [431/707] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:12.848 [432/707] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:12.848 [433/707] Linking static target lib/librte_member.a 00:02:12.848 [434/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:12.848 [435/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:12.848 [436/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:13.118 [437/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:02:13.118 [438/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:13.118 [439/707] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:13.118 [440/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:02:13.118 [441/707] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:13.118 [442/707] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.118 [443/707] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:02:13.118 [444/707] Linking static target drivers/librte_bus_pci.a 00:02:13.118 [445/707] Linking static target lib/acl/libavx2_tmp.a 00:02:13.118 [446/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:13.118 [447/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:13.118 [448/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:13.118 [449/707] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:13.118 [450/707] 
Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:13.118 [451/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:02:13.118 [452/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:13.118 [453/707] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.118 [454/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:13.381 [455/707] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:13.381 [456/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:13.381 [457/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:13.381 [458/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:02:13.381 [459/707] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:13.381 [460/707] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:13.381 [461/707] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.381 [462/707] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.381 [463/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:13.381 [464/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:13.381 [465/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:02:13.381 [466/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:13.381 [467/707] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.381 [468/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:13.381 [469/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:13.381 [470/707] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:02:13.381 [471/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:13.652 [472/707] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.652 [473/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:02:13.652 [474/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:02:13.652 [475/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:02:13.652 [476/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:13.652 [477/707] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:13.652 [478/707] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:02:13.652 [479/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:02:13.652 [480/707] Linking static target lib/librte_pdcp.a 00:02:13.652 [481/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:13.652 [482/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:13.652 [483/707] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:02:13.652 [484/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:02:13.652 [485/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:13.652 [486/707] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:13.652 [487/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:13.652 [488/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:02:13.652 [489/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:13.652 [490/707] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:13.652 [491/707] Compiling C object 
app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:13.913 [492/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:13.913 [493/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:13.913 [494/707] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:13.913 [495/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:13.913 [496/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:13.913 [497/707] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:13.913 [498/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:13.913 [499/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:13.913 [500/707] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:13.913 [501/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:13.913 [502/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:13.913 [503/707] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.913 [504/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:13.913 [505/707] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:13.913 [506/707] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:02:13.913 [507/707] Linking static target drivers/librte_mempool_ring.a 00:02:13.913 [508/707] Linking static target lib/librte_node.a 00:02:13.913 [509/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:13.913 [510/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:13.913 [511/707] Compiling C object 
app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:13.913 [512/707] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.913 [513/707] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:13.913 [514/707] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:13.913 [515/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:02:13.913 [516/707] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:13.913 [517/707] Linking static target lib/librte_port.a 00:02:13.913 [518/707] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:14.172 [519/707] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:14.172 [520/707] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:14.172 [521/707] Linking static target lib/librte_hash.a 00:02:14.172 [522/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:14.172 [523/707] Linking static target lib/librte_eventdev.a 00:02:14.172 [524/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:14.172 [525/707] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.172 [526/707] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:14.172 [527/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:14.172 [528/707] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:14.172 [529/707] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:14.172 [530/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:14.172 [531/707] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:14.172 [532/707] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:14.172 [533/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:02:14.172 [534/707] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:14.172 [535/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:14.172 [536/707] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:14.172 [537/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:14.172 [538/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:14.172 [539/707] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.430 [540/707] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:14.430 [541/707] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:14.430 [542/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:14.430 [543/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:14.430 [544/707] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:14.430 [545/707] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:14.430 [546/707] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:02:14.430 [547/707] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:02:14.430 [548/707] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:14.430 [549/707] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:14.430 [550/707] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:14.430 [551/707] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:14.430 [552/707] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:14.430 [553/707] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:14.430 [554/707] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:14.430 [555/707] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:14.687 [556/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:14.687 [557/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:14.687 [558/707] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:14.687 [559/707] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:14.687 [560/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:14.687 [561/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:14.687 [562/707] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:14.687 [563/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:14.687 [564/707] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.687 [565/707] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:14.687 [566/707] Linking static target lib/librte_acl.a 00:02:14.687 [567/707] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.944 [568/707] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:14.944 [569/707] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:15.202 [570/707] Linking static target lib/librte_ethdev.a 00:02:15.202 [571/707] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:15.202 [572/707] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:02:15.202 [573/707] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.202 [574/707] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:15.459 [575/707] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:15.716 [576/707] Compiling C object 
app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:15.716 [577/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:15.974 [578/707] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:15.974 [579/707] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:16.231 [580/707] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:16.798 [581/707] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:16.798 [582/707] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:17.055 [583/707] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:17.055 [584/707] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:17.312 [585/707] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:17.312 [586/707] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:17.312 [587/707] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:17.312 [588/707] Linking static target drivers/librte_net_i40e.a 00:02:17.878 [589/707] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:18.136 [590/707] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.394 [591/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:18.960 [592/707] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:20.861 [593/707] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:20.861 [594/707] Linking target lib/librte_eal.so.24.0 00:02:21.119 [595/707] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:02:21.119 [596/707] Linking target lib/librte_dmadev.so.24.0 00:02:21.119 [597/707] Linking target lib/librte_meter.so.24.0 00:02:21.119 [598/707] Linking target 
lib/librte_ring.so.24.0 00:02:21.119 [599/707] Linking target lib/librte_rawdev.so.24.0 00:02:21.119 [600/707] Linking target lib/librte_timer.so.24.0 00:02:21.119 [601/707] Linking target lib/librte_pci.so.24.0 00:02:21.119 [602/707] Linking target lib/librte_cfgfile.so.24.0 00:02:21.119 [603/707] Linking target lib/librte_stack.so.24.0 00:02:21.119 [604/707] Linking target drivers/librte_bus_vdev.so.24.0 00:02:21.119 [605/707] Linking target lib/librte_jobstats.so.24.0 00:02:21.119 [606/707] Linking target lib/librte_acl.so.24.0 00:02:21.119 [607/707] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:02:21.119 [608/707] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:02:21.119 [609/707] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:02:21.119 [610/707] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:02:21.119 [611/707] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:02:21.119 [612/707] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:02:21.377 [613/707] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:02:21.378 [614/707] Linking target lib/librte_rcu.so.24.0 00:02:21.378 [615/707] Linking target lib/librte_mempool.so.24.0 00:02:21.378 [616/707] Linking target drivers/librte_bus_pci.so.24.0 00:02:21.378 [617/707] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:02:21.378 [618/707] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:02:21.378 [619/707] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:02:21.378 [620/707] Linking target lib/librte_rib.so.24.0 00:02:21.378 [621/707] Linking target drivers/librte_mempool_ring.so.24.0 00:02:21.378 [622/707] Linking target lib/librte_mbuf.so.24.0 00:02:21.636 [623/707] Generating symbol 
file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:02:21.636 [624/707] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:02:21.636 [625/707] Linking target lib/librte_fib.so.24.0 00:02:21.636 [626/707] Linking target lib/librte_gpudev.so.24.0 00:02:21.636 [627/707] Linking target lib/librte_net.so.24.0 00:02:21.636 [628/707] Linking target lib/librte_bbdev.so.24.0 00:02:21.636 [629/707] Linking target lib/librte_reorder.so.24.0 00:02:21.636 [630/707] Linking target lib/librte_compressdev.so.24.0 00:02:21.636 [631/707] Linking target lib/librte_distributor.so.24.0 00:02:21.636 [632/707] Linking target lib/librte_cryptodev.so.24.0 00:02:21.636 [633/707] Linking target lib/librte_sched.so.24.0 00:02:21.636 [634/707] Linking target lib/librte_regexdev.so.24.0 00:02:21.636 [635/707] Linking target lib/librte_mldev.so.24.0 00:02:21.896 [636/707] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:02:21.896 [637/707] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:02:21.896 [638/707] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:02:21.896 [639/707] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:02:21.896 [640/707] Linking target lib/librte_cmdline.so.24.0 00:02:21.896 [641/707] Linking target lib/librte_hash.so.24.0 00:02:21.896 [642/707] Linking target lib/librte_security.so.24.0 00:02:21.896 [643/707] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:02:21.896 [644/707] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:02:22.154 [645/707] Linking target lib/librte_efd.so.24.0 00:02:22.155 [646/707] Linking target lib/librte_lpm.so.24.0 00:02:22.155 [647/707] Linking target lib/librte_member.so.24.0 00:02:22.155 [648/707] Linking target lib/librte_ipsec.so.24.0 00:02:22.155 [649/707] Linking target lib/librte_pdcp.so.24.0 
00:02:22.155 [650/707] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:02:22.155 [651/707] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:02:22.413 [652/707] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:22.670 [653/707] Linking target lib/librte_ethdev.so.24.0 00:02:22.670 [654/707] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:02:22.670 [655/707] Linking target lib/librte_metrics.so.24.0 00:02:22.670 [656/707] Linking target lib/librte_pcapng.so.24.0 00:02:22.928 [657/707] Linking target lib/librte_bpf.so.24.0 00:02:22.928 [658/707] Linking target lib/librte_gro.so.24.0 00:02:22.928 [659/707] Linking target lib/librte_gso.so.24.0 00:02:22.928 [660/707] Linking target lib/librte_ip_frag.so.24.0 00:02:22.928 [661/707] Linking target lib/librte_power.so.24.0 00:02:22.928 [662/707] Linking target lib/librte_eventdev.so.24.0 00:02:22.928 [663/707] Linking target drivers/librte_net_i40e.so.24.0 00:02:22.928 [664/707] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:02:22.928 [665/707] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:02:22.928 [666/707] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:02:22.928 [667/707] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:02:22.928 [668/707] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:02:22.928 [669/707] Linking target lib/librte_latencystats.so.24.0 00:02:22.928 [670/707] Linking target lib/librte_bitratestats.so.24.0 00:02:22.928 [671/707] Linking target lib/librte_graph.so.24.0 00:02:22.928 [672/707] Linking target lib/librte_pdump.so.24.0 00:02:22.928 [673/707] Linking target lib/librte_dispatcher.so.24.0 00:02:22.928 [674/707] Linking target lib/librte_port.so.24.0 00:02:23.184 [675/707] 
Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:02:23.184 [676/707] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:02:23.184 [677/707] Linking target lib/librte_node.so.24.0 00:02:23.184 [678/707] Linking target lib/librte_table.so.24.0 00:02:23.442 [679/707] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:02:25.971 [680/707] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:25.971 [681/707] Linking static target lib/librte_pipeline.a 00:02:25.971 [682/707] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:25.971 [683/707] Linking static target lib/librte_vhost.a 00:02:26.228 [684/707] Linking target app/dpdk-test-acl 00:02:26.228 [685/707] Linking target app/dpdk-test-cmdline 00:02:26.487 [686/707] Linking target app/dpdk-test-fib 00:02:26.487 [687/707] Linking target app/dpdk-proc-info 00:02:26.487 [688/707] Linking target app/dpdk-test-gpudev 00:02:26.487 [689/707] Linking target app/dpdk-test-pipeline 00:02:26.487 [690/707] Linking target app/dpdk-graph 00:02:26.487 [691/707] Linking target app/dpdk-dumpcap 00:02:26.487 [692/707] Linking target app/dpdk-test-security-perf 00:02:26.487 [693/707] Linking target app/dpdk-test-bbdev 00:02:26.487 [694/707] Linking target app/dpdk-test-eventdev 00:02:26.487 [695/707] Linking target app/dpdk-test-sad 00:02:26.487 [696/707] Linking target app/dpdk-pdump 00:02:26.487 [697/707] Linking target app/dpdk-test-dma-perf 00:02:26.487 [698/707] Linking target app/dpdk-test-compress-perf 00:02:26.487 [699/707] Linking target app/dpdk-test-regex 00:02:26.487 [700/707] Linking target app/dpdk-test-flow-perf 00:02:26.487 [701/707] Linking target app/dpdk-test-mldev 00:02:26.487 [702/707] Linking target app/dpdk-test-crypto-perf 00:02:26.487 [703/707] Linking target app/dpdk-testpmd 00:02:27.864 [704/707] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:27.864 [705/707] Linking target lib/librte_vhost.so.24.0 00:02:30.398 [706/707] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:30.657 [707/707] Linking target lib/librte_pipeline.so.24.0 00:02:30.657 06:08:22 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s 00:02:30.657 06:08:22 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:30.657 06:08:22 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install 00:02:30.657 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:30.657 [0/1] Installing files. 00:02:30.920 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.920 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.921 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:30.921 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:30.921 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:30.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.922 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.923 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.924 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:30.925 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb
00:02:30.925 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:30.925 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib
00:02:30.925
Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_ethdev.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bpf.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.925 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing 
lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_power.a 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:30.926 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_vhost.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_node.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:31.186 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:31.186 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.186 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:31.187 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.187 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:31.187 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-compress-perf to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.187 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.188 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.189 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:31.190 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:31.190 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 
00:02:31.190 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:31.190 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:31.191 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:31.191 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:31.191 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:31.191 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:31.191 Installing symlink pointing to librte_eal.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:31.191 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:31.191 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:31.191 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:31.191 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:31.191 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:31.191 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:31.191 Installing symlink pointing to librte_mbuf.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:31.191 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:31.191 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:31.191 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:31.191 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:31.191 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:31.191 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:31.191 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:31.191 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:31.191 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:31.191 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:31.191 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:31.191 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:31.191 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:31.191 Installing symlink pointing to 
librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:31.191 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:31.191 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:31.191 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:31.191 Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:31.191 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:31.191 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:31.191 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:31.191 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:31.191 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:31.191 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:31.191 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:31.191 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:31.191 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 
00:02:31.191 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:31.191 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:31.191 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:31.191 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:31.191 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:31.191 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:31.191 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:31.191 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:31.191 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:31.191 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:31.191 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:31.191 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:31.191 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:31.191 Installing symlink 
pointing to librte_dispatcher.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:31.191 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:31.191 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:31.191 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:31.191 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:31.191 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:31.191 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:31.191 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:31.191 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:31.191 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:31.191 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:31.191 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:31.191 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:31.191 Installing symlink pointing to librte_lpm.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:31.191 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:31.191 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:31.191 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:31.191 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:31.191 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:31.191 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:31.191 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:31.191 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:31.191 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:31.191 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:31.191 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:31.191 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:31.191 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:31.191 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:31.191 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 
00:02:31.191 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:31.192 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:31.192 './librte_mempool_ring.so' -> 'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:31.192 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:31.192 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:31.192 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:31.192 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:31.192 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:31.192 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:31.192 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:31.192 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:31.192 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:31.192 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:31.192 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:31.192 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:31.192 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:31.192 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:31.192 
Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:31.192 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:31.192 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:31.192 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:31.192 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:31.192 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:31.192 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:31.192 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:31.192 Installing symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:31.192 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:31.192 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:31.192 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:31.192 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:31.192 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 
00:02:31.192 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:31.192 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:31.192 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:31.192 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:31.192 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:31.192 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:31.192 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:31.192 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:31.192 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:31.192 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:31.192 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:31.192 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:31.192 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:31.192 Installing symlink 
pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:31.192 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:31.192 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:31.192 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:31.192 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:31.451 06:08:22 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:31.451 06:08:22 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:31.451 00:02:31.451 real 0m29.720s 00:02:31.451 user 9m28.654s 00:02:31.451 sys 2m9.057s 00:02:31.451 06:08:22 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:31.451 06:08:22 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:31.451 ************************************ 00:02:31.451 END TEST build_native_dpdk 00:02:31.451 ************************************ 00:02:31.451 06:08:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:31.451 06:08:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:31.451 06:08:22 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:31.451 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:31.710 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:31.710 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:31.710 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:31.968 Using 'verbs' RDMA provider 00:02:45.110 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:57.313 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:57.313 Creating mk/config.mk...done. 00:02:57.313 Creating mk/cc.flags.mk...done. 00:02:57.313 Type 'make' to build. 
00:02:57.313 06:08:48 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:57.313 06:08:48 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:57.313 06:08:48 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:57.313 06:08:48 -- common/autotest_common.sh@10 -- $ set +x 00:02:57.313 ************************************ 00:02:57.313 START TEST make 00:02:57.313 ************************************ 00:02:57.313 06:08:48 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:59.228 The Meson build system 00:02:59.228 Version: 1.5.0 00:02:59.228 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:59.228 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:59.228 Build type: native build 00:02:59.228 Project name: libvfio-user 00:02:59.228 Project version: 0.0.1 00:02:59.228 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:59.228 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:59.228 Host machine cpu family: x86_64 00:02:59.228 Host machine cpu: x86_64 00:02:59.228 Run-time dependency threads found: YES 00:02:59.228 Library dl found: YES 00:02:59.228 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:59.228 Run-time dependency json-c found: YES 0.17 00:02:59.228 Run-time dependency cmocka found: YES 1.1.7 00:02:59.228 Program pytest-3 found: NO 00:02:59.228 Program flake8 found: NO 00:02:59.228 Program misspell-fixer found: NO 00:02:59.228 Program restructuredtext-lint found: NO 00:02:59.228 Program valgrind found: YES (/usr/bin/valgrind) 00:02:59.228 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:59.228 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:59.228 Compiler for C supports arguments -Wwrite-strings: YES 00:02:59.228 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup. 00:02:59.228 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:59.228 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:59.228 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:59.228 Build targets in project: 8 00:02:59.228 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:59.228 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:59.228 00:02:59.228 libvfio-user 0.0.1 00:02:59.228 00:02:59.228 User defined options 00:02:59.228 buildtype : debug 00:02:59.228 default_library: shared 00:02:59.228 libdir : /usr/local/lib 00:02:59.228 00:02:59.228 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:59.793 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:59.793 [1/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:59.793 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:59.793 [3/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:59.793 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:59.793 [5/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:59.793 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:59.793 [7/37] Compiling C object samples/null.p/null.c.o 00:02:59.793 [8/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:59.793 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:59.793 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:59.793 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:59.793 [12/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:59.793 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:59.793 [14/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:59.793 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:59.793 [16/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:59.793 [17/37] Compiling C object samples/server.p/server.c.o 00:02:59.793 [18/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:59.793 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:59.793 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:59.793 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:59.793 [22/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:59.793 [23/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:59.793 [24/37] Compiling C object samples/client.p/client.c.o 00:02:59.793 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:59.793 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:59.793 [27/37] Linking target samples/client 00:03:00.050 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:00.050 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:00.050 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:00.050 [31/37] Linking target test/unit_tests 00:03:00.050 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:00.050 [33/37] Linking target samples/gpio-pci-idio-16 00:03:00.050 [34/37] Linking target samples/null 00:03:00.050 [35/37] Linking target samples/server 00:03:00.050 [36/37] Linking target samples/shadow_ioeventfd_server 00:03:00.050 [37/37] Linking target samples/lspci 00:03:00.050 INFO: autodetecting backend as ninja 00:03:00.050 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:00.308 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:00.566 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:00.566 ninja: no work to do. 00:03:27.192 CC lib/log/log.o 00:03:27.192 CC lib/log/log_flags.o 00:03:27.192 CC lib/ut_mock/mock.o 00:03:27.192 CC lib/log/log_deprecated.o 00:03:27.192 CC lib/ut/ut.o 00:03:27.192 LIB libspdk_ut_mock.a 00:03:27.192 LIB libspdk_log.a 00:03:27.192 LIB libspdk_ut.a 00:03:27.192 SO libspdk_ut_mock.so.6.0 00:03:27.192 SO libspdk_log.so.7.1 00:03:27.192 SO libspdk_ut.so.2.0 00:03:27.192 SYMLINK libspdk_ut_mock.so 00:03:27.192 SYMLINK libspdk_log.so 00:03:27.192 SYMLINK libspdk_ut.so 00:03:27.811 CXX lib/trace_parser/trace.o 00:03:27.811 CC lib/ioat/ioat.o 00:03:27.811 CC lib/dma/dma.o 00:03:27.811 CC lib/util/base64.o 00:03:27.811 CC lib/util/bit_array.o 00:03:27.811 CC lib/util/cpuset.o 00:03:27.811 CC lib/util/crc16.o 00:03:27.811 CC lib/util/crc32.o 00:03:27.811 CC lib/util/crc32c.o 00:03:27.811 CC lib/util/crc32_ieee.o 00:03:27.811 CC lib/util/crc64.o 00:03:27.811 CC lib/util/dif.o 00:03:27.811 CC lib/util/fd.o 00:03:27.811 CC lib/util/fd_group.o 00:03:27.811 CC lib/util/file.o 00:03:27.811 CC lib/util/hexlify.o 00:03:27.811 CC lib/util/iov.o 00:03:27.811 CC lib/util/math.o 00:03:27.811 CC lib/util/net.o 00:03:27.811 CC lib/util/pipe.o 00:03:27.811 CC lib/util/strerror_tls.o 00:03:27.811 CC lib/util/string.o 00:03:27.811 CC lib/util/uuid.o 00:03:27.811 CC lib/util/xor.o 00:03:27.811 CC lib/util/zipf.o 00:03:27.811 CC lib/util/md5.o 00:03:27.811 CC lib/vfio_user/host/vfio_user.o 00:03:27.811 CC lib/vfio_user/host/vfio_user_pci.o 00:03:27.811 LIB libspdk_dma.a 00:03:27.811 SO libspdk_dma.so.5.0 00:03:28.070 LIB libspdk_ioat.a 00:03:28.070 
SYMLINK libspdk_dma.so 00:03:28.070 SO libspdk_ioat.so.7.0 00:03:28.070 SYMLINK libspdk_ioat.so 00:03:28.070 LIB libspdk_vfio_user.a 00:03:28.070 SO libspdk_vfio_user.so.5.0 00:03:28.070 SYMLINK libspdk_vfio_user.so 00:03:28.070 LIB libspdk_util.a 00:03:28.329 SO libspdk_util.so.10.1 00:03:28.329 SYMLINK libspdk_util.so 00:03:28.329 LIB libspdk_trace_parser.a 00:03:28.329 SO libspdk_trace_parser.so.6.0 00:03:28.588 SYMLINK libspdk_trace_parser.so 00:03:28.846 CC lib/rdma_utils/rdma_utils.o 00:03:28.846 CC lib/idxd/idxd.o 00:03:28.846 CC lib/idxd/idxd_user.o 00:03:28.846 CC lib/conf/conf.o 00:03:28.846 CC lib/idxd/idxd_kernel.o 00:03:28.846 CC lib/json/json_parse.o 00:03:28.846 CC lib/json/json_util.o 00:03:28.846 CC lib/json/json_write.o 00:03:28.846 CC lib/vmd/vmd.o 00:03:28.846 CC lib/env_dpdk/env.o 00:03:28.846 CC lib/vmd/led.o 00:03:28.846 CC lib/env_dpdk/memory.o 00:03:28.846 CC lib/env_dpdk/pci.o 00:03:28.846 CC lib/env_dpdk/init.o 00:03:28.846 CC lib/env_dpdk/threads.o 00:03:28.846 CC lib/env_dpdk/pci_ioat.o 00:03:28.846 CC lib/env_dpdk/pci_virtio.o 00:03:28.846 CC lib/env_dpdk/pci_vmd.o 00:03:28.846 CC lib/env_dpdk/pci_idxd.o 00:03:28.846 CC lib/env_dpdk/pci_event.o 00:03:28.846 CC lib/env_dpdk/sigbus_handler.o 00:03:28.846 CC lib/env_dpdk/pci_dpdk.o 00:03:28.846 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:28.846 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:28.846 LIB libspdk_conf.a 00:03:29.104 SO libspdk_conf.so.6.0 00:03:29.104 LIB libspdk_rdma_utils.a 00:03:29.104 LIB libspdk_json.a 00:03:29.104 SO libspdk_rdma_utils.so.1.0 00:03:29.104 SYMLINK libspdk_conf.so 00:03:29.104 SO libspdk_json.so.6.0 00:03:29.104 SYMLINK libspdk_rdma_utils.so 00:03:29.104 SYMLINK libspdk_json.so 00:03:29.104 LIB libspdk_idxd.a 00:03:29.363 SO libspdk_idxd.so.12.1 00:03:29.363 LIB libspdk_vmd.a 00:03:29.363 SO libspdk_vmd.so.6.0 00:03:29.363 SYMLINK libspdk_idxd.so 00:03:29.363 SYMLINK libspdk_vmd.so 00:03:29.363 CC lib/rdma_provider/common.o 00:03:29.363 CC 
lib/rdma_provider/rdma_provider_verbs.o 00:03:29.363 CC lib/jsonrpc/jsonrpc_server.o 00:03:29.363 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:29.363 CC lib/jsonrpc/jsonrpc_client.o 00:03:29.363 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:29.622 LIB libspdk_rdma_provider.a 00:03:29.622 SO libspdk_rdma_provider.so.7.0 00:03:29.622 LIB libspdk_jsonrpc.a 00:03:29.622 SO libspdk_jsonrpc.so.6.0 00:03:29.622 SYMLINK libspdk_rdma_provider.so 00:03:29.881 SYMLINK libspdk_jsonrpc.so 00:03:29.881 LIB libspdk_env_dpdk.a 00:03:29.881 SO libspdk_env_dpdk.so.15.1 00:03:29.881 SYMLINK libspdk_env_dpdk.so 00:03:30.140 CC lib/rpc/rpc.o 00:03:30.399 LIB libspdk_rpc.a 00:03:30.399 SO libspdk_rpc.so.6.0 00:03:30.399 SYMLINK libspdk_rpc.so 00:03:30.657 CC lib/trace/trace.o 00:03:30.657 CC lib/trace/trace_flags.o 00:03:30.657 CC lib/trace/trace_rpc.o 00:03:30.916 CC lib/notify/notify.o 00:03:30.916 CC lib/notify/notify_rpc.o 00:03:30.916 CC lib/keyring/keyring.o 00:03:30.916 CC lib/keyring/keyring_rpc.o 00:03:30.916 LIB libspdk_notify.a 00:03:30.916 LIB libspdk_trace.a 00:03:30.916 SO libspdk_notify.so.6.0 00:03:30.916 LIB libspdk_keyring.a 00:03:30.916 SO libspdk_trace.so.11.0 00:03:30.916 SYMLINK libspdk_notify.so 00:03:30.916 SO libspdk_keyring.so.2.0 00:03:31.175 SYMLINK libspdk_trace.so 00:03:31.175 SYMLINK libspdk_keyring.so 00:03:31.433 CC lib/thread/thread.o 00:03:31.433 CC lib/thread/iobuf.o 00:03:31.433 CC lib/sock/sock.o 00:03:31.433 CC lib/sock/sock_rpc.o 00:03:31.722 LIB libspdk_sock.a 00:03:31.722 SO libspdk_sock.so.10.0 00:03:31.981 SYMLINK libspdk_sock.so 00:03:32.240 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:32.240 CC lib/nvme/nvme_ctrlr.o 00:03:32.240 CC lib/nvme/nvme_fabric.o 00:03:32.240 CC lib/nvme/nvme_ns_cmd.o 00:03:32.240 CC lib/nvme/nvme_ns.o 00:03:32.240 CC lib/nvme/nvme_pcie_common.o 00:03:32.240 CC lib/nvme/nvme_pcie.o 00:03:32.240 CC lib/nvme/nvme_qpair.o 00:03:32.240 CC lib/nvme/nvme.o 00:03:32.240 CC lib/nvme/nvme_quirks.o 00:03:32.240 CC lib/nvme/nvme_transport.o 
00:03:32.240 CC lib/nvme/nvme_discovery.o 00:03:32.240 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:32.240 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:32.240 CC lib/nvme/nvme_tcp.o 00:03:32.240 CC lib/nvme/nvme_opal.o 00:03:32.240 CC lib/nvme/nvme_io_msg.o 00:03:32.240 CC lib/nvme/nvme_poll_group.o 00:03:32.240 CC lib/nvme/nvme_zns.o 00:03:32.240 CC lib/nvme/nvme_stubs.o 00:03:32.240 CC lib/nvme/nvme_auth.o 00:03:32.240 CC lib/nvme/nvme_cuse.o 00:03:32.240 CC lib/nvme/nvme_vfio_user.o 00:03:32.240 CC lib/nvme/nvme_rdma.o 00:03:32.498 LIB libspdk_thread.a 00:03:32.498 SO libspdk_thread.so.11.0 00:03:32.498 SYMLINK libspdk_thread.so 00:03:33.064 CC lib/fsdev/fsdev_io.o 00:03:33.064 CC lib/fsdev/fsdev.o 00:03:33.064 CC lib/fsdev/fsdev_rpc.o 00:03:33.064 CC lib/accel/accel.o 00:03:33.064 CC lib/accel/accel_rpc.o 00:03:33.064 CC lib/blob/zeroes.o 00:03:33.064 CC lib/blob/blobstore.o 00:03:33.064 CC lib/accel/accel_sw.o 00:03:33.064 CC lib/blob/blob_bs_dev.o 00:03:33.064 CC lib/vfu_tgt/tgt_endpoint.o 00:03:33.064 CC lib/blob/request.o 00:03:33.064 CC lib/vfu_tgt/tgt_rpc.o 00:03:33.064 CC lib/virtio/virtio.o 00:03:33.064 CC lib/virtio/virtio_vhost_user.o 00:03:33.064 CC lib/virtio/virtio_vfio_user.o 00:03:33.064 CC lib/virtio/virtio_pci.o 00:03:33.064 CC lib/init/json_config.o 00:03:33.064 CC lib/init/subsystem.o 00:03:33.064 CC lib/init/subsystem_rpc.o 00:03:33.064 CC lib/init/rpc.o 00:03:33.323 LIB libspdk_init.a 00:03:33.323 SO libspdk_init.so.6.0 00:03:33.323 LIB libspdk_virtio.a 00:03:33.323 LIB libspdk_vfu_tgt.a 00:03:33.323 SO libspdk_virtio.so.7.0 00:03:33.323 SO libspdk_vfu_tgt.so.3.0 00:03:33.323 SYMLINK libspdk_init.so 00:03:33.323 SYMLINK libspdk_virtio.so 00:03:33.323 SYMLINK libspdk_vfu_tgt.so 00:03:33.584 LIB libspdk_fsdev.a 00:03:33.584 SO libspdk_fsdev.so.2.0 00:03:33.584 SYMLINK libspdk_fsdev.so 00:03:33.584 CC lib/event/app.o 00:03:33.584 CC lib/event/reactor.o 00:03:33.584 CC lib/event/log_rpc.o 00:03:33.584 CC lib/event/app_rpc.o 00:03:33.584 CC 
lib/event/scheduler_static.o 00:03:33.843 LIB libspdk_accel.a 00:03:33.843 SO libspdk_accel.so.16.0 00:03:33.843 SYMLINK libspdk_accel.so 00:03:33.843 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:33.843 LIB libspdk_nvme.a 00:03:34.101 LIB libspdk_event.a 00:03:34.101 SO libspdk_event.so.14.0 00:03:34.101 SO libspdk_nvme.so.15.0 00:03:34.101 SYMLINK libspdk_event.so 00:03:34.359 CC lib/bdev/bdev.o 00:03:34.359 CC lib/bdev/bdev_rpc.o 00:03:34.359 CC lib/bdev/bdev_zone.o 00:03:34.359 CC lib/bdev/part.o 00:03:34.359 CC lib/bdev/scsi_nvme.o 00:03:34.359 SYMLINK libspdk_nvme.so 00:03:34.359 LIB libspdk_fuse_dispatcher.a 00:03:34.359 SO libspdk_fuse_dispatcher.so.1.0 00:03:34.359 SYMLINK libspdk_fuse_dispatcher.so 00:03:35.295 LIB libspdk_blob.a 00:03:35.295 SO libspdk_blob.so.12.0 00:03:35.295 SYMLINK libspdk_blob.so 00:03:35.554 CC lib/blobfs/blobfs.o 00:03:35.554 CC lib/blobfs/tree.o 00:03:35.554 CC lib/lvol/lvol.o 00:03:36.121 LIB libspdk_bdev.a 00:03:36.121 SO libspdk_bdev.so.17.0 00:03:36.121 LIB libspdk_blobfs.a 00:03:36.121 SYMLINK libspdk_bdev.so 00:03:36.121 SO libspdk_blobfs.so.11.0 00:03:36.380 LIB libspdk_lvol.a 00:03:36.380 SYMLINK libspdk_blobfs.so 00:03:36.380 SO libspdk_lvol.so.11.0 00:03:36.380 SYMLINK libspdk_lvol.so 00:03:36.641 CC lib/nvmf/ctrlr.o 00:03:36.641 CC lib/nvmf/ctrlr_discovery.o 00:03:36.641 CC lib/nvmf/ctrlr_bdev.o 00:03:36.641 CC lib/nvmf/subsystem.o 00:03:36.641 CC lib/nvmf/nvmf.o 00:03:36.641 CC lib/nvmf/nvmf_rpc.o 00:03:36.641 CC lib/nbd/nbd.o 00:03:36.641 CC lib/nvmf/transport.o 00:03:36.641 CC lib/nbd/nbd_rpc.o 00:03:36.641 CC lib/scsi/dev.o 00:03:36.641 CC lib/ublk/ublk.o 00:03:36.641 CC lib/scsi/lun.o 00:03:36.641 CC lib/nvmf/tcp.o 00:03:36.641 CC lib/nvmf/stubs.o 00:03:36.641 CC lib/ublk/ublk_rpc.o 00:03:36.641 CC lib/scsi/port.o 00:03:36.641 CC lib/nvmf/mdns_server.o 00:03:36.641 CC lib/scsi/scsi.o 00:03:36.641 CC lib/scsi/scsi_bdev.o 00:03:36.641 CC lib/nvmf/vfio_user.o 00:03:36.641 CC lib/ftl/ftl_core.o 00:03:36.641 CC 
lib/scsi/scsi_pr.o 00:03:36.641 CC lib/ftl/ftl_init.o 00:03:36.641 CC lib/nvmf/rdma.o 00:03:36.641 CC lib/scsi/scsi_rpc.o 00:03:36.641 CC lib/ftl/ftl_layout.o 00:03:36.641 CC lib/scsi/task.o 00:03:36.641 CC lib/nvmf/auth.o 00:03:36.641 CC lib/ftl/ftl_debug.o 00:03:36.641 CC lib/ftl/ftl_io.o 00:03:36.641 CC lib/ftl/ftl_sb.o 00:03:36.641 CC lib/ftl/ftl_l2p.o 00:03:36.641 CC lib/ftl/ftl_l2p_flat.o 00:03:36.641 CC lib/ftl/ftl_nv_cache.o 00:03:36.641 CC lib/ftl/ftl_band.o 00:03:36.641 CC lib/ftl/ftl_band_ops.o 00:03:36.641 CC lib/ftl/ftl_writer.o 00:03:36.641 CC lib/ftl/ftl_rq.o 00:03:36.641 CC lib/ftl/ftl_reloc.o 00:03:36.641 CC lib/ftl/ftl_l2p_cache.o 00:03:36.641 CC lib/ftl/ftl_p2l.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt.o 00:03:36.641 CC lib/ftl/ftl_p2l_log.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.641 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.641 CC lib/ftl/utils/ftl_md.o 00:03:36.641 CC lib/ftl/utils/ftl_conf.o 00:03:36.641 CC lib/ftl/utils/ftl_mempool.o 00:03:36.641 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.641 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.641 CC lib/ftl/utils/ftl_property.o 00:03:36.641 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:36.641 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.641 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:36.641 
CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.641 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.641 CC lib/ftl/base/ftl_base_dev.o 00:03:36.641 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:36.641 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:36.641 CC lib/ftl/ftl_trace.o 00:03:36.641 CC lib/ftl/base/ftl_base_bdev.o 00:03:37.208 LIB libspdk_scsi.a 00:03:37.208 LIB libspdk_nbd.a 00:03:37.208 SO libspdk_scsi.so.9.0 00:03:37.208 SO libspdk_nbd.so.7.0 00:03:37.466 LIB libspdk_ublk.a 00:03:37.466 SYMLINK libspdk_scsi.so 00:03:37.466 SYMLINK libspdk_nbd.so 00:03:37.466 SO libspdk_ublk.so.3.0 00:03:37.466 SYMLINK libspdk_ublk.so 00:03:37.725 LIB libspdk_ftl.a 00:03:37.725 CC lib/iscsi/conn.o 00:03:37.725 CC lib/iscsi/init_grp.o 00:03:37.725 CC lib/iscsi/param.o 00:03:37.725 CC lib/iscsi/iscsi.o 00:03:37.725 CC lib/vhost/vhost.o 00:03:37.725 CC lib/vhost/vhost_rpc.o 00:03:37.725 CC lib/iscsi/tgt_node.o 00:03:37.725 CC lib/iscsi/portal_grp.o 00:03:37.725 CC lib/vhost/vhost_scsi.o 00:03:37.725 CC lib/vhost/vhost_blk.o 00:03:37.725 CC lib/iscsi/iscsi_subsystem.o 00:03:37.725 CC lib/vhost/rte_vhost_user.o 00:03:37.725 CC lib/iscsi/iscsi_rpc.o 00:03:37.725 CC lib/iscsi/task.o 00:03:37.725 SO libspdk_ftl.so.9.0 00:03:37.983 SYMLINK libspdk_ftl.so 00:03:38.241 LIB libspdk_nvmf.a 00:03:38.498 SO libspdk_nvmf.so.20.0 00:03:38.498 LIB libspdk_vhost.a 00:03:38.498 SO libspdk_vhost.so.8.0 00:03:38.498 SYMLINK libspdk_nvmf.so 00:03:38.757 SYMLINK libspdk_vhost.so 00:03:38.757 LIB libspdk_iscsi.a 00:03:38.757 SO libspdk_iscsi.so.8.0 00:03:38.757 SYMLINK libspdk_iscsi.so 00:03:39.324 CC module/env_dpdk/env_dpdk_rpc.o 00:03:39.324 CC module/vfu_device/vfu_virtio.o 00:03:39.324 CC module/vfu_device/vfu_virtio_blk.o 00:03:39.324 CC module/vfu_device/vfu_virtio_scsi.o 00:03:39.583 CC module/vfu_device/vfu_virtio_rpc.o 00:03:39.583 CC module/vfu_device/vfu_virtio_fs.o 00:03:39.583 LIB libspdk_env_dpdk_rpc.a 00:03:39.583 CC module/blob/bdev/blob_bdev.o 00:03:39.583 CC module/accel/iaa/accel_iaa.o 00:03:39.583 
CC module/accel/iaa/accel_iaa_rpc.o 00:03:39.583 CC module/scheduler/gscheduler/gscheduler.o 00:03:39.583 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:39.583 CC module/keyring/file/keyring.o 00:03:39.583 CC module/keyring/file/keyring_rpc.o 00:03:39.583 CC module/accel/dsa/accel_dsa.o 00:03:39.583 CC module/accel/dsa/accel_dsa_rpc.o 00:03:39.583 CC module/fsdev/aio/fsdev_aio.o 00:03:39.583 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:39.583 CC module/keyring/linux/keyring.o 00:03:39.583 CC module/keyring/linux/keyring_rpc.o 00:03:39.583 CC module/fsdev/aio/linux_aio_mgr.o 00:03:39.583 CC module/sock/posix/posix.o 00:03:39.583 CC module/accel/error/accel_error.o 00:03:39.583 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:39.583 CC module/accel/error/accel_error_rpc.o 00:03:39.583 SO libspdk_env_dpdk_rpc.so.6.0 00:03:39.583 CC module/accel/ioat/accel_ioat.o 00:03:39.583 CC module/accel/ioat/accel_ioat_rpc.o 00:03:39.583 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.841 LIB libspdk_scheduler_gscheduler.a 00:03:39.841 LIB libspdk_scheduler_dpdk_governor.a 00:03:39.841 LIB libspdk_keyring_linux.a 00:03:39.841 LIB libspdk_keyring_file.a 00:03:39.841 SO libspdk_scheduler_gscheduler.so.4.0 00:03:39.841 LIB libspdk_accel_ioat.a 00:03:39.841 LIB libspdk_scheduler_dynamic.a 00:03:39.841 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:39.841 LIB libspdk_accel_iaa.a 00:03:39.841 SO libspdk_keyring_linux.so.1.0 00:03:39.841 SO libspdk_keyring_file.so.2.0 00:03:39.841 LIB libspdk_accel_error.a 00:03:39.841 SO libspdk_accel_ioat.so.6.0 00:03:39.841 SO libspdk_scheduler_dynamic.so.4.0 00:03:39.841 SO libspdk_accel_iaa.so.3.0 00:03:39.841 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.841 SO libspdk_accel_error.so.2.0 00:03:39.841 SYMLINK libspdk_keyring_file.so 00:03:39.841 LIB libspdk_blob_bdev.a 00:03:39.841 SYMLINK libspdk_keyring_linux.so 00:03:39.841 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.841 LIB libspdk_accel_dsa.a 00:03:39.841 SO 
libspdk_blob_bdev.so.12.0 00:03:39.841 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.841 SYMLINK libspdk_accel_iaa.so 00:03:39.841 SYMLINK libspdk_accel_ioat.so 00:03:39.841 SYMLINK libspdk_accel_error.so 00:03:39.841 SO libspdk_accel_dsa.so.5.0 00:03:40.100 SYMLINK libspdk_blob_bdev.so 00:03:40.100 LIB libspdk_vfu_device.a 00:03:40.100 SYMLINK libspdk_accel_dsa.so 00:03:40.100 SO libspdk_vfu_device.so.3.0 00:03:40.100 SYMLINK libspdk_vfu_device.so 00:03:40.100 LIB libspdk_fsdev_aio.a 00:03:40.358 SO libspdk_fsdev_aio.so.1.0 00:03:40.358 LIB libspdk_sock_posix.a 00:03:40.358 SO libspdk_sock_posix.so.6.0 00:03:40.358 SYMLINK libspdk_fsdev_aio.so 00:03:40.358 SYMLINK libspdk_sock_posix.so 00:03:40.617 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:40.617 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:40.617 CC module/bdev/delay/vbdev_delay.o 00:03:40.617 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:40.617 CC module/bdev/split/vbdev_split_rpc.o 00:03:40.617 CC module/bdev/split/vbdev_split.o 00:03:40.617 CC module/bdev/error/vbdev_error.o 00:03:40.617 CC module/bdev/error/vbdev_error_rpc.o 00:03:40.617 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:40.617 CC module/bdev/gpt/gpt.o 00:03:40.617 CC module/bdev/malloc/bdev_malloc.o 00:03:40.617 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:40.617 CC module/bdev/gpt/vbdev_gpt.o 00:03:40.617 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:40.617 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:40.617 CC module/bdev/nvme/bdev_nvme.o 00:03:40.617 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:40.617 CC module/bdev/passthru/vbdev_passthru.o 00:03:40.617 CC module/blobfs/bdev/blobfs_bdev.o 00:03:40.617 CC module/bdev/nvme/nvme_rpc.o 00:03:40.617 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:40.617 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:40.617 CC module/bdev/nvme/vbdev_opal.o 00:03:40.617 CC module/bdev/nvme/bdev_mdns_client.o 00:03:40.617 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:40.617 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:40.617 CC module/bdev/null/bdev_null.o 00:03:40.617 CC module/bdev/null/bdev_null_rpc.o 00:03:40.617 CC module/bdev/iscsi/bdev_iscsi.o 00:03:40.617 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:40.617 CC module/bdev/lvol/vbdev_lvol.o 00:03:40.617 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:40.617 CC module/bdev/raid/bdev_raid.o 00:03:40.617 CC module/bdev/raid/bdev_raid_rpc.o 00:03:40.617 CC module/bdev/raid/bdev_raid_sb.o 00:03:40.617 CC module/bdev/aio/bdev_aio.o 00:03:40.617 CC module/bdev/aio/bdev_aio_rpc.o 00:03:40.617 CC module/bdev/raid/raid0.o 00:03:40.617 CC module/bdev/raid/raid1.o 00:03:40.617 CC module/bdev/raid/concat.o 00:03:40.617 CC module/bdev/ftl/bdev_ftl.o 00:03:40.617 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:40.875 LIB libspdk_blobfs_bdev.a 00:03:40.875 LIB libspdk_bdev_split.a 00:03:40.875 SO libspdk_blobfs_bdev.so.6.0 00:03:40.875 LIB libspdk_bdev_error.a 00:03:40.875 SO libspdk_bdev_split.so.6.0 00:03:40.875 SO libspdk_bdev_error.so.6.0 00:03:40.875 SYMLINK libspdk_blobfs_bdev.so 00:03:40.875 LIB libspdk_bdev_gpt.a 00:03:40.875 LIB libspdk_bdev_passthru.a 00:03:40.875 SYMLINK libspdk_bdev_split.so 00:03:40.875 LIB libspdk_bdev_null.a 00:03:40.875 LIB libspdk_bdev_zone_block.a 00:03:40.875 LIB libspdk_bdev_ftl.a 00:03:40.875 SO libspdk_bdev_passthru.so.6.0 00:03:40.875 SYMLINK libspdk_bdev_error.so 00:03:40.875 SO libspdk_bdev_gpt.so.6.0 00:03:40.875 LIB libspdk_bdev_delay.a 00:03:40.875 SO libspdk_bdev_null.so.6.0 00:03:40.875 LIB libspdk_bdev_malloc.a 00:03:40.875 SO libspdk_bdev_ftl.so.6.0 00:03:40.875 SO libspdk_bdev_zone_block.so.6.0 00:03:40.875 LIB libspdk_bdev_aio.a 00:03:40.875 SO libspdk_bdev_delay.so.6.0 00:03:40.875 SO libspdk_bdev_malloc.so.6.0 00:03:40.875 SYMLINK libspdk_bdev_gpt.so 00:03:40.875 SYMLINK libspdk_bdev_null.so 00:03:40.875 LIB libspdk_bdev_iscsi.a 00:03:40.875 SYMLINK libspdk_bdev_passthru.so 00:03:41.134 SO libspdk_bdev_aio.so.6.0 00:03:41.134 SYMLINK 
libspdk_bdev_zone_block.so 00:03:41.134 SYMLINK libspdk_bdev_ftl.so 00:03:41.134 SYMLINK libspdk_bdev_delay.so 00:03:41.134 SO libspdk_bdev_iscsi.so.6.0 00:03:41.134 SYMLINK libspdk_bdev_malloc.so 00:03:41.134 LIB libspdk_bdev_virtio.a 00:03:41.134 SYMLINK libspdk_bdev_aio.so 00:03:41.134 LIB libspdk_bdev_lvol.a 00:03:41.134 SO libspdk_bdev_virtio.so.6.0 00:03:41.134 SYMLINK libspdk_bdev_iscsi.so 00:03:41.134 SO libspdk_bdev_lvol.so.6.0 00:03:41.134 SYMLINK libspdk_bdev_virtio.so 00:03:41.134 SYMLINK libspdk_bdev_lvol.so 00:03:41.392 LIB libspdk_bdev_raid.a 00:03:41.392 SO libspdk_bdev_raid.so.6.0 00:03:41.651 SYMLINK libspdk_bdev_raid.so 00:03:42.586 LIB libspdk_bdev_nvme.a 00:03:42.586 SO libspdk_bdev_nvme.so.7.1 00:03:42.586 SYMLINK libspdk_bdev_nvme.so 00:03:43.524 CC module/event/subsystems/iobuf/iobuf.o 00:03:43.524 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:43.524 CC module/event/subsystems/sock/sock.o 00:03:43.524 CC module/event/subsystems/vmd/vmd.o 00:03:43.524 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:43.524 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:43.524 CC module/event/subsystems/keyring/keyring.o 00:03:43.524 CC module/event/subsystems/fsdev/fsdev.o 00:03:43.524 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:43.524 CC module/event/subsystems/scheduler/scheduler.o 00:03:43.524 LIB libspdk_event_fsdev.a 00:03:43.524 LIB libspdk_event_keyring.a 00:03:43.524 LIB libspdk_event_sock.a 00:03:43.524 LIB libspdk_event_vmd.a 00:03:43.524 LIB libspdk_event_scheduler.a 00:03:43.524 LIB libspdk_event_vhost_blk.a 00:03:43.524 LIB libspdk_event_iobuf.a 00:03:43.524 LIB libspdk_event_vfu_tgt.a 00:03:43.524 SO libspdk_event_fsdev.so.1.0 00:03:43.524 SO libspdk_event_vmd.so.6.0 00:03:43.524 SO libspdk_event_sock.so.5.0 00:03:43.524 SO libspdk_event_keyring.so.1.0 00:03:43.524 SO libspdk_event_scheduler.so.4.0 00:03:43.524 SO libspdk_event_vhost_blk.so.3.0 00:03:43.524 SO libspdk_event_iobuf.so.3.0 00:03:43.524 SO 
libspdk_event_vfu_tgt.so.3.0 00:03:43.524 SYMLINK libspdk_event_fsdev.so 00:03:43.524 SYMLINK libspdk_event_sock.so 00:03:43.524 SYMLINK libspdk_event_keyring.so 00:03:43.524 SYMLINK libspdk_event_scheduler.so 00:03:43.524 SYMLINK libspdk_event_vmd.so 00:03:43.524 SYMLINK libspdk_event_iobuf.so 00:03:43.524 SYMLINK libspdk_event_vhost_blk.so 00:03:43.524 SYMLINK libspdk_event_vfu_tgt.so 00:03:44.092 CC module/event/subsystems/accel/accel.o 00:03:44.093 LIB libspdk_event_accel.a 00:03:44.093 SO libspdk_event_accel.so.6.0 00:03:44.093 SYMLINK libspdk_event_accel.so 00:03:44.660 CC module/event/subsystems/bdev/bdev.o 00:03:44.660 LIB libspdk_event_bdev.a 00:03:44.660 SO libspdk_event_bdev.so.6.0 00:03:44.660 SYMLINK libspdk_event_bdev.so 00:03:45.228 CC module/event/subsystems/ublk/ublk.o 00:03:45.228 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:45.228 CC module/event/subsystems/nbd/nbd.o 00:03:45.228 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:45.228 CC module/event/subsystems/scsi/scsi.o 00:03:45.228 LIB libspdk_event_ublk.a 00:03:45.228 LIB libspdk_event_nbd.a 00:03:45.228 LIB libspdk_event_scsi.a 00:03:45.228 SO libspdk_event_ublk.so.3.0 00:03:45.228 SO libspdk_event_nbd.so.6.0 00:03:45.228 SO libspdk_event_scsi.so.6.0 00:03:45.228 LIB libspdk_event_nvmf.a 00:03:45.488 SYMLINK libspdk_event_ublk.so 00:03:45.488 SYMLINK libspdk_event_nbd.so 00:03:45.488 SO libspdk_event_nvmf.so.6.0 00:03:45.488 SYMLINK libspdk_event_scsi.so 00:03:45.488 SYMLINK libspdk_event_nvmf.so 00:03:45.747 CC module/event/subsystems/iscsi/iscsi.o 00:03:45.747 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:46.006 LIB libspdk_event_vhost_scsi.a 00:03:46.006 LIB libspdk_event_iscsi.a 00:03:46.006 SO libspdk_event_vhost_scsi.so.3.0 00:03:46.006 SO libspdk_event_iscsi.so.6.0 00:03:46.006 SYMLINK libspdk_event_vhost_scsi.so 00:03:46.006 SYMLINK libspdk_event_iscsi.so 00:03:46.265 SO libspdk.so.6.0 00:03:46.265 SYMLINK libspdk.so 00:03:46.523 CXX app/trace/trace.o 00:03:46.523 CC 
app/spdk_nvme_perf/perf.o 00:03:46.523 CC test/rpc_client/rpc_client_test.o 00:03:46.523 CC app/spdk_top/spdk_top.o 00:03:46.523 CC app/spdk_nvme_identify/identify.o 00:03:46.523 CC app/trace_record/trace_record.o 00:03:46.523 CC app/spdk_lspci/spdk_lspci.o 00:03:46.523 CC app/spdk_nvme_discover/discovery_aer.o 00:03:46.523 TEST_HEADER include/spdk/accel.h 00:03:46.524 TEST_HEADER include/spdk/accel_module.h 00:03:46.524 TEST_HEADER include/spdk/assert.h 00:03:46.524 TEST_HEADER include/spdk/barrier.h 00:03:46.524 TEST_HEADER include/spdk/base64.h 00:03:46.524 TEST_HEADER include/spdk/bdev.h 00:03:46.524 TEST_HEADER include/spdk/bdev_module.h 00:03:46.524 TEST_HEADER include/spdk/bdev_zone.h 00:03:46.524 TEST_HEADER include/spdk/bit_array.h 00:03:46.524 TEST_HEADER include/spdk/blob_bdev.h 00:03:46.524 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:46.524 TEST_HEADER include/spdk/bit_pool.h 00:03:46.524 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:46.524 TEST_HEADER include/spdk/conf.h 00:03:46.524 TEST_HEADER include/spdk/blobfs.h 00:03:46.524 TEST_HEADER include/spdk/blob.h 00:03:46.524 TEST_HEADER include/spdk/config.h 00:03:46.524 TEST_HEADER include/spdk/cpuset.h 00:03:46.524 TEST_HEADER include/spdk/crc16.h 00:03:46.524 TEST_HEADER include/spdk/crc64.h 00:03:46.524 TEST_HEADER include/spdk/crc32.h 00:03:46.524 TEST_HEADER include/spdk/dma.h 00:03:46.524 TEST_HEADER include/spdk/dif.h 00:03:46.524 TEST_HEADER include/spdk/endian.h 00:03:46.524 TEST_HEADER include/spdk/env_dpdk.h 00:03:46.524 TEST_HEADER include/spdk/env.h 00:03:46.524 TEST_HEADER include/spdk/event.h 00:03:46.524 TEST_HEADER include/spdk/fd_group.h 00:03:46.524 TEST_HEADER include/spdk/fd.h 00:03:46.524 TEST_HEADER include/spdk/file.h 00:03:46.524 TEST_HEADER include/spdk/fsdev_module.h 00:03:46.524 TEST_HEADER include/spdk/ftl.h 00:03:46.524 TEST_HEADER include/spdk/fsdev.h 00:03:46.524 TEST_HEADER include/spdk/gpt_spec.h 00:03:46.524 TEST_HEADER include/spdk/histogram_data.h 00:03:46.524 
TEST_HEADER include/spdk/hexlify.h 00:03:46.524 CC app/nvmf_tgt/nvmf_main.o 00:03:46.524 TEST_HEADER include/spdk/idxd_spec.h 00:03:46.524 TEST_HEADER include/spdk/idxd.h 00:03:46.524 TEST_HEADER include/spdk/ioat.h 00:03:46.524 TEST_HEADER include/spdk/init.h 00:03:46.524 TEST_HEADER include/spdk/ioat_spec.h 00:03:46.524 TEST_HEADER include/spdk/json.h 00:03:46.524 TEST_HEADER include/spdk/iscsi_spec.h 00:03:46.524 CC app/spdk_dd/spdk_dd.o 00:03:46.524 TEST_HEADER include/spdk/jsonrpc.h 00:03:46.524 TEST_HEADER include/spdk/keyring.h 00:03:46.524 TEST_HEADER include/spdk/keyring_module.h 00:03:46.524 TEST_HEADER include/spdk/likely.h 00:03:46.524 TEST_HEADER include/spdk/lvol.h 00:03:46.524 TEST_HEADER include/spdk/log.h 00:03:46.524 TEST_HEADER include/spdk/md5.h 00:03:46.524 TEST_HEADER include/spdk/mmio.h 00:03:46.524 TEST_HEADER include/spdk/nbd.h 00:03:46.524 TEST_HEADER include/spdk/memory.h 00:03:46.524 CC app/iscsi_tgt/iscsi_tgt.o 00:03:46.524 TEST_HEADER include/spdk/notify.h 00:03:46.524 TEST_HEADER include/spdk/net.h 00:03:46.524 TEST_HEADER include/spdk/nvme.h 00:03:46.524 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:46.524 TEST_HEADER include/spdk/nvme_intel.h 00:03:46.524 TEST_HEADER include/spdk/nvme_spec.h 00:03:46.524 TEST_HEADER include/spdk/nvme_zns.h 00:03:46.524 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:46.524 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:46.524 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:46.524 TEST_HEADER include/spdk/nvmf.h 00:03:46.524 TEST_HEADER include/spdk/nvmf_spec.h 00:03:46.524 TEST_HEADER include/spdk/nvmf_transport.h 00:03:46.524 TEST_HEADER include/spdk/opal_spec.h 00:03:46.524 TEST_HEADER include/spdk/opal.h 00:03:46.524 TEST_HEADER include/spdk/pipe.h 00:03:46.524 TEST_HEADER include/spdk/pci_ids.h 00:03:46.524 TEST_HEADER include/spdk/queue.h 00:03:46.790 TEST_HEADER include/spdk/rpc.h 00:03:46.790 TEST_HEADER include/spdk/scheduler.h 00:03:46.790 TEST_HEADER include/spdk/reduce.h 00:03:46.790 TEST_HEADER 
include/spdk/scsi_spec.h 00:03:46.790 TEST_HEADER include/spdk/stdinc.h 00:03:46.790 TEST_HEADER include/spdk/sock.h 00:03:46.790 TEST_HEADER include/spdk/scsi.h 00:03:46.790 TEST_HEADER include/spdk/string.h 00:03:46.790 TEST_HEADER include/spdk/trace.h 00:03:46.790 TEST_HEADER include/spdk/trace_parser.h 00:03:46.790 TEST_HEADER include/spdk/thread.h 00:03:46.790 TEST_HEADER include/spdk/tree.h 00:03:46.790 TEST_HEADER include/spdk/ublk.h 00:03:46.790 TEST_HEADER include/spdk/util.h 00:03:46.790 TEST_HEADER include/spdk/version.h 00:03:46.790 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:46.790 TEST_HEADER include/spdk/uuid.h 00:03:46.790 TEST_HEADER include/spdk/vhost.h 00:03:46.790 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:46.790 TEST_HEADER include/spdk/vmd.h 00:03:46.791 TEST_HEADER include/spdk/xor.h 00:03:46.791 TEST_HEADER include/spdk/zipf.h 00:03:46.791 CXX test/cpp_headers/accel.o 00:03:46.791 CC app/spdk_tgt/spdk_tgt.o 00:03:46.791 CXX test/cpp_headers/accel_module.o 00:03:46.791 CXX test/cpp_headers/assert.o 00:03:46.791 CXX test/cpp_headers/barrier.o 00:03:46.791 CXX test/cpp_headers/base64.o 00:03:46.791 CXX test/cpp_headers/bdev.o 00:03:46.791 CXX test/cpp_headers/bdev_module.o 00:03:46.791 CXX test/cpp_headers/bdev_zone.o 00:03:46.791 CXX test/cpp_headers/bit_pool.o 00:03:46.791 CXX test/cpp_headers/bit_array.o 00:03:46.791 CXX test/cpp_headers/blob_bdev.o 00:03:46.791 CXX test/cpp_headers/blobfs_bdev.o 00:03:46.791 CXX test/cpp_headers/blobfs.o 00:03:46.791 CXX test/cpp_headers/blob.o 00:03:46.791 CXX test/cpp_headers/config.o 00:03:46.791 CXX test/cpp_headers/crc16.o 00:03:46.791 CXX test/cpp_headers/conf.o 00:03:46.791 CXX test/cpp_headers/crc32.o 00:03:46.791 CXX test/cpp_headers/cpuset.o 00:03:46.791 CXX test/cpp_headers/crc64.o 00:03:46.791 CXX test/cpp_headers/dif.o 00:03:46.791 CXX test/cpp_headers/dma.o 00:03:46.791 CXX test/cpp_headers/endian.o 00:03:46.791 CXX test/cpp_headers/env_dpdk.o 00:03:46.791 CXX test/cpp_headers/env.o 
00:03:46.791 CXX test/cpp_headers/event.o 00:03:46.791 CXX test/cpp_headers/fd_group.o 00:03:46.791 CXX test/cpp_headers/fd.o 00:03:46.791 CXX test/cpp_headers/file.o 00:03:46.791 CXX test/cpp_headers/fsdev.o 00:03:46.791 CXX test/cpp_headers/gpt_spec.o 00:03:46.791 CXX test/cpp_headers/fsdev_module.o 00:03:46.791 CXX test/cpp_headers/hexlify.o 00:03:46.791 CXX test/cpp_headers/ftl.o 00:03:46.791 CXX test/cpp_headers/histogram_data.o 00:03:46.791 CXX test/cpp_headers/idxd_spec.o 00:03:46.791 CXX test/cpp_headers/idxd.o 00:03:46.791 CXX test/cpp_headers/init.o 00:03:46.791 CXX test/cpp_headers/iscsi_spec.o 00:03:46.791 CXX test/cpp_headers/ioat.o 00:03:46.791 CXX test/cpp_headers/ioat_spec.o 00:03:46.791 CXX test/cpp_headers/jsonrpc.o 00:03:46.791 CXX test/cpp_headers/keyring.o 00:03:46.791 CXX test/cpp_headers/json.o 00:03:46.791 CXX test/cpp_headers/likely.o 00:03:46.791 CXX test/cpp_headers/keyring_module.o 00:03:46.791 CXX test/cpp_headers/log.o 00:03:46.791 CXX test/cpp_headers/lvol.o 00:03:46.791 CXX test/cpp_headers/md5.o 00:03:46.791 CXX test/cpp_headers/memory.o 00:03:46.791 CXX test/cpp_headers/mmio.o 00:03:46.791 CXX test/cpp_headers/nbd.o 00:03:46.791 CXX test/cpp_headers/net.o 00:03:46.791 CXX test/cpp_headers/notify.o 00:03:46.791 CXX test/cpp_headers/nvme.o 00:03:46.791 CXX test/cpp_headers/nvme_intel.o 00:03:46.791 CXX test/cpp_headers/nvme_ocssd.o 00:03:46.791 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:46.791 CXX test/cpp_headers/nvme_spec.o 00:03:46.791 CXX test/cpp_headers/nvmf_cmd.o 00:03:46.791 CXX test/cpp_headers/nvme_zns.o 00:03:46.791 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:46.791 CXX test/cpp_headers/nvmf.o 00:03:46.791 CXX test/cpp_headers/nvmf_spec.o 00:03:46.791 CXX test/cpp_headers/nvmf_transport.o 00:03:46.791 CXX test/cpp_headers/opal.o 00:03:46.791 CXX test/cpp_headers/opal_spec.o 00:03:46.791 CC examples/util/zipf/zipf.o 00:03:46.791 CXX test/cpp_headers/pci_ids.o 00:03:46.791 CC test/app/stub/stub.o 00:03:46.791 CC 
app/fio/nvme/fio_plugin.o 00:03:46.791 CC test/thread/poller_perf/poller_perf.o 00:03:46.791 CC test/env/memory/memory_ut.o 00:03:46.791 CC test/app/jsoncat/jsoncat.o 00:03:46.791 CC examples/ioat/perf/perf.o 00:03:46.791 CC test/app/histogram_perf/histogram_perf.o 00:03:46.791 CC test/env/pci/pci_ut.o 00:03:46.791 CC examples/ioat/verify/verify.o 00:03:46.791 CC app/fio/bdev/fio_plugin.o 00:03:46.791 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:46.791 CC test/env/vtophys/vtophys.o 00:03:46.791 CC test/dma/test_dma/test_dma.o 00:03:47.068 CC test/app/bdev_svc/bdev_svc.o 00:03:47.068 LINK spdk_lspci 00:03:47.068 LINK nvmf_tgt 00:03:47.330 LINK rpc_client_test 00:03:47.330 LINK spdk_trace_record 00:03:47.330 LINK interrupt_tgt 00:03:47.330 CC test/env/mem_callbacks/mem_callbacks.o 00:03:47.330 LINK spdk_nvme_discover 00:03:47.330 LINK iscsi_tgt 00:03:47.330 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:47.330 LINK jsoncat 00:03:47.330 CXX test/cpp_headers/pipe.o 00:03:47.330 LINK poller_perf 00:03:47.330 LINK histogram_perf 00:03:47.330 CXX test/cpp_headers/queue.o 00:03:47.330 CXX test/cpp_headers/reduce.o 00:03:47.330 CXX test/cpp_headers/scheduler.o 00:03:47.330 CXX test/cpp_headers/scsi.o 00:03:47.330 CXX test/cpp_headers/rpc.o 00:03:47.330 CXX test/cpp_headers/scsi_spec.o 00:03:47.330 CXX test/cpp_headers/sock.o 00:03:47.330 CXX test/cpp_headers/stdinc.o 00:03:47.330 CXX test/cpp_headers/thread.o 00:03:47.330 CXX test/cpp_headers/string.o 00:03:47.330 CXX test/cpp_headers/trace.o 00:03:47.330 CXX test/cpp_headers/trace_parser.o 00:03:47.330 CXX test/cpp_headers/tree.o 00:03:47.330 CXX test/cpp_headers/ublk.o 00:03:47.330 CXX test/cpp_headers/util.o 00:03:47.330 CXX test/cpp_headers/uuid.o 00:03:47.330 CXX test/cpp_headers/version.o 00:03:47.330 CXX test/cpp_headers/vfio_user_pci.o 00:03:47.330 CXX test/cpp_headers/vfio_user_spec.o 00:03:47.330 CXX test/cpp_headers/vhost.o 00:03:47.330 CXX test/cpp_headers/vmd.o 00:03:47.330 CXX test/cpp_headers/xor.o 
00:03:47.330 CXX test/cpp_headers/zipf.o 00:03:47.330 LINK zipf 00:03:47.589 LINK bdev_svc 00:03:47.589 LINK ioat_perf 00:03:47.589 LINK spdk_tgt 00:03:47.589 LINK vtophys 00:03:47.589 LINK stub 00:03:47.589 LINK env_dpdk_post_init 00:03:47.589 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:47.589 LINK spdk_trace 00:03:47.589 LINK verify 00:03:47.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:47.589 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:47.589 LINK spdk_dd 00:03:47.847 LINK spdk_bdev 00:03:47.847 LINK pci_ut 00:03:47.847 LINK spdk_nvme 00:03:47.847 LINK spdk_nvme_identify 00:03:47.847 CC test/event/reactor/reactor.o 00:03:47.847 LINK test_dma 00:03:47.847 CC test/event/event_perf/event_perf.o 00:03:47.847 LINK spdk_nvme_perf 00:03:47.847 CC test/event/reactor_perf/reactor_perf.o 00:03:47.847 LINK spdk_top 00:03:47.848 CC test/event/app_repeat/app_repeat.o 00:03:47.848 LINK nvme_fuzz 00:03:47.848 CC test/event/scheduler/scheduler.o 00:03:47.848 CC examples/vmd/lsvmd/lsvmd.o 00:03:47.848 CC examples/vmd/led/led.o 00:03:48.106 CC examples/sock/hello_world/hello_sock.o 00:03:48.106 CC examples/idxd/perf/perf.o 00:03:48.106 CC examples/thread/thread/thread_ex.o 00:03:48.106 LINK vhost_fuzz 00:03:48.106 LINK mem_callbacks 00:03:48.106 CC app/vhost/vhost.o 00:03:48.106 LINK event_perf 00:03:48.106 LINK reactor_perf 00:03:48.106 LINK reactor 00:03:48.106 LINK lsvmd 00:03:48.106 LINK led 00:03:48.106 LINK app_repeat 00:03:48.106 LINK scheduler 00:03:48.106 LINK hello_sock 00:03:48.363 LINK thread 00:03:48.363 LINK vhost 00:03:48.363 LINK idxd_perf 00:03:48.363 LINK memory_ut 00:03:48.363 CC test/nvme/e2edp/nvme_dp.o 00:03:48.363 CC test/nvme/simple_copy/simple_copy.o 00:03:48.363 CC test/nvme/reserve/reserve.o 00:03:48.363 CC test/nvme/aer/aer.o 00:03:48.363 CC test/nvme/overhead/overhead.o 00:03:48.363 CC test/nvme/fdp/fdp.o 00:03:48.363 CC test/nvme/reset/reset.o 00:03:48.363 CC test/nvme/compliance/nvme_compliance.o 00:03:48.363 CC 
test/nvme/startup/startup.o 00:03:48.363 CC test/nvme/fused_ordering/fused_ordering.o 00:03:48.363 CC test/nvme/sgl/sgl.o 00:03:48.363 CC test/nvme/cuse/cuse.o 00:03:48.363 CC test/nvme/err_injection/err_injection.o 00:03:48.363 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:48.363 CC test/nvme/boot_partition/boot_partition.o 00:03:48.363 CC test/nvme/connect_stress/connect_stress.o 00:03:48.363 CC test/blobfs/mkfs/mkfs.o 00:03:48.363 CC test/accel/dif/dif.o 00:03:48.620 CC test/lvol/esnap/esnap.o 00:03:48.620 LINK startup 00:03:48.620 LINK doorbell_aers 00:03:48.620 LINK connect_stress 00:03:48.620 LINK err_injection 00:03:48.620 LINK reserve 00:03:48.620 LINK simple_copy 00:03:48.620 LINK boot_partition 00:03:48.620 LINK fused_ordering 00:03:48.620 LINK reset 00:03:48.620 LINK mkfs 00:03:48.620 LINK nvme_dp 00:03:48.620 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:48.620 CC examples/nvme/abort/abort.o 00:03:48.620 LINK overhead 00:03:48.620 CC examples/nvme/reconnect/reconnect.o 00:03:48.620 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:48.620 CC examples/nvme/arbitration/arbitration.o 00:03:48.620 CC examples/nvme/hotplug/hotplug.o 00:03:48.620 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:48.620 LINK aer 00:03:48.620 CC examples/nvme/hello_world/hello_world.o 00:03:48.879 LINK sgl 00:03:48.879 LINK nvme_compliance 00:03:48.879 LINK fdp 00:03:48.879 CC examples/accel/perf/accel_perf.o 00:03:48.879 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:48.879 CC examples/blob/cli/blobcli.o 00:03:48.879 CC examples/blob/hello_world/hello_blob.o 00:03:48.879 LINK pmr_persistence 00:03:48.879 LINK cmb_copy 00:03:48.879 LINK hello_world 00:03:48.879 LINK hotplug 00:03:49.137 LINK arbitration 00:03:49.137 LINK iscsi_fuzz 00:03:49.137 LINK reconnect 00:03:49.137 LINK abort 00:03:49.137 LINK hello_blob 00:03:49.137 LINK dif 00:03:49.137 LINK hello_fsdev 00:03:49.137 LINK nvme_manage 00:03:49.137 LINK accel_perf 00:03:49.137 LINK blobcli 00:03:49.705 LINK cuse 
00:03:49.705 CC test/bdev/bdevio/bdevio.o 00:03:49.705 CC examples/bdev/hello_world/hello_bdev.o 00:03:49.705 CC examples/bdev/bdevperf/bdevperf.o 00:03:49.963 LINK hello_bdev 00:03:49.963 LINK bdevio 00:03:50.221 LINK bdevperf 00:03:50.788 CC examples/nvmf/nvmf/nvmf.o 00:03:51.046 LINK nvmf 00:03:52.423 LINK esnap 00:03:52.423 00:03:52.423 real 0m55.239s 00:03:52.423 user 6m49.632s 00:03:52.423 sys 2m54.114s 00:03:52.423 06:09:43 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:52.423 06:09:43 make -- common/autotest_common.sh@10 -- $ set +x 00:03:52.423 ************************************ 00:03:52.423 END TEST make 00:03:52.423 ************************************ 00:03:52.423 06:09:43 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:52.423 06:09:43 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:52.423 06:09:43 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:52.423 06:09:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.423 06:09:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:52.423 06:09:43 -- pm/common@44 -- $ pid=675333 00:03:52.423 06:09:43 -- pm/common@50 -- $ kill -TERM 675333 00:03:52.423 06:09:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.423 06:09:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:52.423 06:09:43 -- pm/common@44 -- $ pid=675335 00:03:52.423 06:09:43 -- pm/common@50 -- $ kill -TERM 675335 00:03:52.423 06:09:43 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.423 06:09:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:52.423 06:09:43 -- pm/common@44 -- $ pid=675337 00:03:52.423 06:09:43 -- pm/common@50 -- $ kill -TERM 675337 00:03:52.423 06:09:43 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:52.423 06:09:43 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:52.423 06:09:43 -- pm/common@44 -- $ pid=675363 00:03:52.423 06:09:43 -- pm/common@50 -- $ sudo -E kill -TERM 675363 00:03:52.423 06:09:44 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:52.423 06:09:44 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:52.682 06:09:44 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:52.682 06:09:44 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:52.682 06:09:44 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:52.682 06:09:44 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:52.682 06:09:44 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.682 06:09:44 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.682 06:09:44 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.682 06:09:44 -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.682 06:09:44 -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.682 06:09:44 -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.682 06:09:44 -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.682 06:09:44 -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.682 06:09:44 -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.682 06:09:44 -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.682 06:09:44 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:52.682 06:09:44 -- scripts/common.sh@344 -- # case "$op" in 00:03:52.683 06:09:44 -- scripts/common.sh@345 -- # : 1 00:03:52.683 06:09:44 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.683 06:09:44 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:52.683 06:09:44 -- scripts/common.sh@365 -- # decimal 1 00:03:52.683 06:09:44 -- scripts/common.sh@353 -- # local d=1 00:03:52.683 06:09:44 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.683 06:09:44 -- scripts/common.sh@355 -- # echo 1 00:03:52.683 06:09:44 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.683 06:09:44 -- scripts/common.sh@366 -- # decimal 2 00:03:52.683 06:09:44 -- scripts/common.sh@353 -- # local d=2 00:03:52.683 06:09:44 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.683 06:09:44 -- scripts/common.sh@355 -- # echo 2 00:03:52.683 06:09:44 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.683 06:09:44 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.683 06:09:44 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.683 06:09:44 -- scripts/common.sh@368 -- # return 0 00:03:52.683 06:09:44 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.683 06:09:44 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --rc geninfo_unexecuted_blocks=1 00:03:52.683 00:03:52.683 ' 00:03:52.683 06:09:44 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --rc geninfo_unexecuted_blocks=1 00:03:52.683 00:03:52.683 ' 00:03:52.683 06:09:44 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc 
genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --rc geninfo_unexecuted_blocks=1 00:03:52.683 00:03:52.683 ' 00:03:52.683 06:09:44 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:52.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.683 --rc genhtml_branch_coverage=1 00:03:52.683 --rc genhtml_function_coverage=1 00:03:52.683 --rc genhtml_legend=1 00:03:52.683 --rc geninfo_all_blocks=1 00:03:52.683 --rc geninfo_unexecuted_blocks=1 00:03:52.683 00:03:52.683 ' 00:03:52.683 06:09:44 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:52.683 06:09:44 -- nvmf/common.sh@7 -- # uname -s 00:03:52.683 06:09:44 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.683 06:09:44 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.683 06:09:44 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.683 06:09:44 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.683 06:09:44 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:52.683 06:09:44 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:52.683 06:09:44 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:52.683 06:09:44 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:52.683 06:09:44 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.683 06:09:44 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.683 06:09:44 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:03:52.683 06:09:44 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:03:52.683 06:09:44 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.683 06:09:44 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.683 06:09:44 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:52.683 06:09:44 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.683 06:09:44 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:52.683 06:09:44 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.683 06:09:44 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.683 06:09:44 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.683 06:09:44 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.683 06:09:44 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.683 06:09:44 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.683 06:09:44 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.683 06:09:44 -- paths/export.sh@5 -- # export PATH 00:03:52.683 06:09:44 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.683 06:09:44 -- nvmf/common.sh@51 -- # : 0 00:03:52.683 06:09:44 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:52.683 06:09:44 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:03:52.683 06:09:44 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.683 06:09:44 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.683 06:09:44 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.683 06:09:44 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:52.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:03:52.683 06:09:44 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:52.683 06:09:44 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:52.683 06:09:44 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:52.683 06:09:44 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.683 06:09:44 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.683 06:09:44 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.683 06:09:44 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.683 06:09:44 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.683 06:09:44 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.683 06:09:44 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.683 06:09:44 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.683 06:09:44 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.683 06:09:44 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.683 06:09:44 -- spdk/autotest.sh@48 -- # udevadm_pid=756014 00:03:52.683 06:09:44 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.683 06:09:44 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.683 06:09:44 -- pm/common@17 -- # local monitor 00:03:52.683 06:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.683 06:09:44 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:03:52.683 06:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.683 06:09:44 -- pm/common@21 -- # date +%s 00:03:52.683 06:09:44 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.683 06:09:44 -- pm/common@21 -- # date +%s 00:03:52.683 06:09:44 -- pm/common@25 -- # sleep 1 00:03:52.683 06:09:44 -- pm/common@21 -- # date +%s 00:03:52.683 06:09:44 -- pm/common@21 -- # date +%s 00:03:52.683 06:09:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734066584 00:03:52.683 06:09:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734066584 00:03:52.683 06:09:44 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734066584 00:03:52.683 06:09:44 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734066584 00:03:52.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734066584_collect-cpu-load.pm.log 00:03:52.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734066584_collect-vmstat.pm.log 00:03:52.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734066584_collect-cpu-temp.pm.log 00:03:52.683 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734066584_collect-bmc-pm.bmc.pm.log 00:03:53.625 
06:09:45 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.625 06:09:45 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.625 06:09:45 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:53.625 06:09:45 -- common/autotest_common.sh@10 -- # set +x 00:03:53.625 06:09:45 -- spdk/autotest.sh@59 -- # create_test_list 00:03:53.625 06:09:45 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:53.625 06:09:45 -- common/autotest_common.sh@10 -- # set +x 00:03:53.885 06:09:45 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:53.885 06:09:45 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.885 06:09:45 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.885 06:09:45 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:53.885 06:09:45 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.885 06:09:45 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.885 06:09:45 -- common/autotest_common.sh@1457 -- # uname 00:03:53.885 06:09:45 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:53.885 06:09:45 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.885 06:09:45 -- common/autotest_common.sh@1477 -- # uname 00:03:53.885 06:09:45 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:53.885 06:09:45 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:53.885 06:09:45 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:53.885 lcov: LCOV version 1.15 00:03:53.885 06:09:45 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:15.821 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:15.821 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:19.109 06:10:10 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:19.109 06:10:10 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:19.109 06:10:10 -- common/autotest_common.sh@10 -- # set +x 00:04:19.109 06:10:10 -- spdk/autotest.sh@78 -- # rm -f 00:04:19.109 06:10:10 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.645 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:21.645 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:21.645 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:21.645 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:21.905 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:21.905 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:21.905 06:10:13 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:21.905 06:10:13 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:21.905 06:10:13 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:21.905 06:10:13 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:21.905 06:10:13 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:21.905 06:10:13 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:21.905 06:10:13 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:21.905 06:10:13 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:21.905 06:10:13 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:21.905 06:10:13 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:21.905 06:10:13 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:21.905 06:10:13 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:21.905 06:10:13 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:21.905 06:10:13 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:21.905 06:10:13 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:21.905 06:10:13 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:21.905 06:10:13 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:21.905 06:10:13 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:21.905 06:10:13 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:21.905 No valid GPT data, bailing 00:04:21.905 06:10:13 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:21.905 06:10:13 -- scripts/common.sh@394 -- # pt= 00:04:21.905 06:10:13 -- scripts/common.sh@395 -- 
# return 1 00:04:21.905 06:10:13 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:21.905 1+0 records in 00:04:21.905 1+0 records out 00:04:21.905 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00168951 s, 621 MB/s 00:04:21.905 06:10:13 -- spdk/autotest.sh@105 -- # sync 00:04:21.905 06:10:13 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:21.905 06:10:13 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:21.905 06:10:13 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:28.472 06:10:18 -- spdk/autotest.sh@111 -- # uname -s 00:04:28.472 06:10:18 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:28.472 06:10:18 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:28.472 06:10:18 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:30.377 Hugepages 00:04:30.377 node hugesize free / total 00:04:30.377 node0 1048576kB 0 / 0 00:04:30.377 node0 2048kB 0 / 0 00:04:30.377 node1 1048576kB 0 / 0 00:04:30.377 node1 2048kB 0 / 0 00:04:30.377 00:04:30.377 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:30.377 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:30.377 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:30.377 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:30.377 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:30.377 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:30.377 06:10:21 -- spdk/autotest.sh@117 -- # uname -s 00:04:30.377 06:10:21 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:30.377 06:10:21 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:30.377 06:10:21 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:33.667 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:33.667 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:33.926 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:34.185 06:10:25 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:35.122 06:10:26 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:35.122 06:10:26 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:35.122 06:10:26 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:35.122 06:10:26 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:35.122 06:10:26 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:35.122 06:10:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:35.122 06:10:26 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:35.122 06:10:26 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:35.122 06:10:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:35.122 06:10:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:35.122 06:10:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:35.122 06:10:26 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:38.412 Waiting for block devices as requested 00:04:38.412 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:38.412 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.412 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.412 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.412 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:38.412 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:38.412 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:38.670 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:38.670 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:38.670 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:38.929 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:38.929 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:38.929 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:39.187 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:39.187 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:39.187 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:39.187 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:39.446 06:10:30 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:39.446 06:10:30 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:39.446 06:10:30 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:39.446 06:10:30 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:39.446 06:10:30 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:39.446 06:10:30 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:39.446 06:10:30 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:39.446 06:10:30 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:39.446 06:10:30 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:39.446 06:10:30 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:39.446 06:10:30 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:39.446 06:10:30 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:39.446 06:10:30 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:39.446 06:10:30 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:39.446 06:10:30 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:39.446 06:10:30 -- common/autotest_common.sh@1543 -- # continue 00:04:39.446 06:10:30 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:39.446 06:10:30 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:39.446 06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:04:39.446 06:10:30 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:39.446 06:10:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:39.446 
06:10:30 -- common/autotest_common.sh@10 -- # set +x 00:04:39.446 06:10:30 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.803 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:42.803 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:43.063 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.322 06:10:34 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:43.322 06:10:34 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:43.322 06:10:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.322 06:10:34 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:43.322 06:10:34 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:43.322 06:10:34 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:43.322 06:10:34 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:43.322 06:10:34 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:43.322 06:10:34 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:43.322 06:10:34 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:43.322 06:10:34 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:43.322 06:10:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:43.322 06:10:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:43.322 06:10:34 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:43.322 06:10:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:43.322 06:10:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:43.322 06:10:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:43.322 06:10:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:43.322 06:10:34 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:43.322 06:10:34 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:43.322 06:10:34 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:43.322 06:10:34 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:43.322 06:10:34 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:43.322 06:10:34 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:43.322 06:10:34 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:43.322 06:10:34 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:43.322 06:10:34 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=770206 00:04:43.322 06:10:34 -- common/autotest_common.sh@1585 -- # waitforlisten 770206 00:04:43.322 06:10:34 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:43.322 06:10:34 -- common/autotest_common.sh@835 -- # '[' -z 770206 ']' 00:04:43.322 06:10:34 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.322 06:10:34 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.322 06:10:34 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.322 06:10:34 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.322 06:10:34 -- common/autotest_common.sh@10 -- # set +x 00:04:43.581 [2024-12-13 06:10:35.025534] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:43.581 [2024-12-13 06:10:35.025584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid770206 ] 00:04:43.581 [2024-12-13 06:10:35.102090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.581 [2024-12-13 06:10:35.124129] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.840 06:10:35 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.840 06:10:35 -- common/autotest_common.sh@868 -- # return 0 00:04:43.840 06:10:35 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:43.840 06:10:35 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:43.840 06:10:35 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:47.128 nvme0n1 00:04:47.128 06:10:38 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:47.128 [2024-12-13 06:10:38.493643] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:04:47.128 [2024-12-13 06:10:38.493673] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:04:47.128 request: 00:04:47.128 { 00:04:47.128 "nvme_ctrlr_name": "nvme0", 00:04:47.128 "password": "test", 00:04:47.128 "method": 
"bdev_nvme_opal_revert", 00:04:47.128 "req_id": 1 00:04:47.128 } 00:04:47.128 Got JSON-RPC error response 00:04:47.128 response: 00:04:47.128 { 00:04:47.128 "code": -32603, 00:04:47.128 "message": "Internal error" 00:04:47.128 } 00:04:47.128 06:10:38 -- common/autotest_common.sh@1591 -- # true 00:04:47.128 06:10:38 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:47.128 06:10:38 -- common/autotest_common.sh@1595 -- # killprocess 770206 00:04:47.128 06:10:38 -- common/autotest_common.sh@954 -- # '[' -z 770206 ']' 00:04:47.128 06:10:38 -- common/autotest_common.sh@958 -- # kill -0 770206 00:04:47.128 06:10:38 -- common/autotest_common.sh@959 -- # uname 00:04:47.128 06:10:38 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:47.128 06:10:38 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 770206 00:04:47.128 06:10:38 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:47.128 06:10:38 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:47.128 06:10:38 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 770206' 00:04:47.128 killing process with pid 770206 00:04:47.128 06:10:38 -- common/autotest_common.sh@973 -- # kill 770206 00:04:47.128 06:10:38 -- common/autotest_common.sh@978 -- # wait 770206 00:04:48.506 06:10:40 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:48.506 06:10:40 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:48.506 06:10:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.506 06:10:40 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:48.506 06:10:40 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:48.506 06:10:40 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.506 06:10:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.506 06:10:40 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:48.506 06:10:40 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.506 06:10:40 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.506 06:10:40 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.506 06:10:40 -- common/autotest_common.sh@10 -- # set +x 00:04:48.765 ************************************ 00:04:48.765 START TEST env 00:04:48.765 ************************************ 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:48.765 * Looking for test storage... 00:04:48.765 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.765 06:10:40 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.765 06:10:40 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.765 06:10:40 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.765 06:10:40 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.765 06:10:40 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.765 06:10:40 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.765 06:10:40 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.765 06:10:40 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.765 06:10:40 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.765 06:10:40 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.765 06:10:40 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.765 06:10:40 env -- scripts/common.sh@344 -- # case "$op" in 00:04:48.765 06:10:40 env -- scripts/common.sh@345 -- # : 1 00:04:48.765 06:10:40 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.765 06:10:40 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:48.765 06:10:40 env -- scripts/common.sh@365 -- # decimal 1 00:04:48.765 06:10:40 env -- scripts/common.sh@353 -- # local d=1 00:04:48.765 06:10:40 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.765 06:10:40 env -- scripts/common.sh@355 -- # echo 1 00:04:48.765 06:10:40 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.765 06:10:40 env -- scripts/common.sh@366 -- # decimal 2 00:04:48.765 06:10:40 env -- scripts/common.sh@353 -- # local d=2 00:04:48.765 06:10:40 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.765 06:10:40 env -- scripts/common.sh@355 -- # echo 2 00:04:48.765 06:10:40 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.765 06:10:40 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.765 06:10:40 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.765 06:10:40 env -- scripts/common.sh@368 -- # return 0 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.765 --rc genhtml_branch_coverage=1 00:04:48.765 --rc genhtml_function_coverage=1 00:04:48.765 --rc genhtml_legend=1 00:04:48.765 --rc geninfo_all_blocks=1 00:04:48.765 --rc geninfo_unexecuted_blocks=1 00:04:48.765 00:04:48.765 ' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.765 --rc genhtml_branch_coverage=1 00:04:48.765 --rc genhtml_function_coverage=1 00:04:48.765 --rc genhtml_legend=1 00:04:48.765 --rc geninfo_all_blocks=1 00:04:48.765 --rc geninfo_unexecuted_blocks=1 00:04:48.765 00:04:48.765 ' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:48.765 --rc genhtml_branch_coverage=1 00:04:48.765 --rc genhtml_function_coverage=1 00:04:48.765 --rc genhtml_legend=1 00:04:48.765 --rc geninfo_all_blocks=1 00:04:48.765 --rc geninfo_unexecuted_blocks=1 00:04:48.765 00:04:48.765 ' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.765 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.765 --rc genhtml_branch_coverage=1 00:04:48.765 --rc genhtml_function_coverage=1 00:04:48.765 --rc genhtml_legend=1 00:04:48.765 --rc geninfo_all_blocks=1 00:04:48.765 --rc geninfo_unexecuted_blocks=1 00:04:48.765 00:04:48.765 ' 00:04:48.765 06:10:40 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.765 06:10:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.765 06:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:48.765 ************************************ 00:04:48.765 START TEST env_memory 00:04:48.765 ************************************ 00:04:48.765 06:10:40 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:48.765 00:04:48.765 00:04:48.765 CUnit - A unit testing framework for C - Version 2.1-3 00:04:48.765 http://cunit.sourceforge.net/ 00:04:48.765 00:04:48.765 00:04:48.765 Suite: memory 00:04:48.765 Test: alloc and free memory map ...[2024-12-13 06:10:40.415785] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:49.025 passed 00:04:49.025 Test: mem map translation ...[2024-12-13 06:10:40.434810] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:49.025 [2024-12-13 
06:10:40.434825] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:49.025 [2024-12-13 06:10:40.434861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:49.025 [2024-12-13 06:10:40.434868] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:49.025 passed 00:04:49.025 Test: mem map registration ...[2024-12-13 06:10:40.473242] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:49.025 [2024-12-13 06:10:40.473257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:49.025 passed 00:04:49.025 Test: mem map adjacent registrations ...passed 00:04:49.025 00:04:49.025 Run Summary: Type Total Ran Passed Failed Inactive 00:04:49.025 suites 1 1 n/a 0 0 00:04:49.025 tests 4 4 4 0 0 00:04:49.025 asserts 152 152 152 0 n/a 00:04:49.025 00:04:49.025 Elapsed time = 0.138 seconds 00:04:49.025 00:04:49.025 real 0m0.151s 00:04:49.025 user 0m0.138s 00:04:49.025 sys 0m0.012s 00:04:49.025 06:10:40 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.025 06:10:40 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:49.025 ************************************ 00:04:49.025 END TEST env_memory 00:04:49.025 ************************************ 00:04:49.025 06:10:40 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.025 06:10:40 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:49.025 06:10:40 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.025 06:10:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:49.025 ************************************ 00:04:49.025 START TEST env_vtophys 00:04:49.025 ************************************ 00:04:49.025 06:10:40 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:49.025 EAL: lib.eal log level changed from notice to debug 00:04:49.025 EAL: Detected lcore 0 as core 0 on socket 0 00:04:49.025 EAL: Detected lcore 1 as core 1 on socket 0 00:04:49.025 EAL: Detected lcore 2 as core 2 on socket 0 00:04:49.025 EAL: Detected lcore 3 as core 3 on socket 0 00:04:49.025 EAL: Detected lcore 4 as core 4 on socket 0 00:04:49.025 EAL: Detected lcore 5 as core 5 on socket 0 00:04:49.025 EAL: Detected lcore 6 as core 6 on socket 0 00:04:49.025 EAL: Detected lcore 7 as core 8 on socket 0 00:04:49.025 EAL: Detected lcore 8 as core 9 on socket 0 00:04:49.025 EAL: Detected lcore 9 as core 10 on socket 0 00:04:49.025 EAL: Detected lcore 10 as core 11 on socket 0 00:04:49.025 EAL: Detected lcore 11 as core 12 on socket 0 00:04:49.025 EAL: Detected lcore 12 as core 13 on socket 0 00:04:49.025 EAL: Detected lcore 13 as core 16 on socket 0 00:04:49.025 EAL: Detected lcore 14 as core 17 on socket 0 00:04:49.025 EAL: Detected lcore 15 as core 18 on socket 0 00:04:49.025 EAL: Detected lcore 16 as core 19 on socket 0 00:04:49.025 EAL: Detected lcore 17 as core 20 on socket 0 00:04:49.025 EAL: Detected lcore 18 as core 21 on socket 0 00:04:49.025 EAL: Detected lcore 19 as core 25 on socket 0 00:04:49.025 EAL: Detected lcore 20 as core 26 on socket 0 00:04:49.025 EAL: Detected lcore 21 as core 27 on socket 0 00:04:49.025 EAL: Detected lcore 22 as core 28 on socket 0 00:04:49.025 EAL: Detected lcore 23 as core 29 on socket 0 00:04:49.025 EAL: Detected lcore 24 as core 0 on socket 1 00:04:49.025 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:49.025 EAL: Detected lcore 26 as core 2 on socket 1 00:04:49.025 EAL: Detected lcore 27 as core 3 on socket 1 00:04:49.025 EAL: Detected lcore 28 as core 4 on socket 1 00:04:49.025 EAL: Detected lcore 29 as core 5 on socket 1 00:04:49.025 EAL: Detected lcore 30 as core 6 on socket 1 00:04:49.025 EAL: Detected lcore 31 as core 8 on socket 1 00:04:49.025 EAL: Detected lcore 32 as core 9 on socket 1 00:04:49.025 EAL: Detected lcore 33 as core 10 on socket 1 00:04:49.025 EAL: Detected lcore 34 as core 11 on socket 1 00:04:49.025 EAL: Detected lcore 35 as core 12 on socket 1 00:04:49.025 EAL: Detected lcore 36 as core 13 on socket 1 00:04:49.025 EAL: Detected lcore 37 as core 16 on socket 1 00:04:49.025 EAL: Detected lcore 38 as core 17 on socket 1 00:04:49.025 EAL: Detected lcore 39 as core 18 on socket 1 00:04:49.025 EAL: Detected lcore 40 as core 19 on socket 1 00:04:49.025 EAL: Detected lcore 41 as core 20 on socket 1 00:04:49.025 EAL: Detected lcore 42 as core 21 on socket 1 00:04:49.025 EAL: Detected lcore 43 as core 25 on socket 1 00:04:49.025 EAL: Detected lcore 44 as core 26 on socket 1 00:04:49.025 EAL: Detected lcore 45 as core 27 on socket 1 00:04:49.025 EAL: Detected lcore 46 as core 28 on socket 1 00:04:49.025 EAL: Detected lcore 47 as core 29 on socket 1 00:04:49.025 EAL: Detected lcore 48 as core 0 on socket 0 00:04:49.025 EAL: Detected lcore 49 as core 1 on socket 0 00:04:49.025 EAL: Detected lcore 50 as core 2 on socket 0 00:04:49.025 EAL: Detected lcore 51 as core 3 on socket 0 00:04:49.025 EAL: Detected lcore 52 as core 4 on socket 0 00:04:49.025 EAL: Detected lcore 53 as core 5 on socket 0 00:04:49.025 EAL: Detected lcore 54 as core 6 on socket 0 00:04:49.025 EAL: Detected lcore 55 as core 8 on socket 0 00:04:49.025 EAL: Detected lcore 56 as core 9 on socket 0 00:04:49.025 EAL: Detected lcore 57 as core 10 on socket 0 00:04:49.025 EAL: Detected lcore 58 as core 11 on socket 0 00:04:49.025 EAL: Detected lcore 59 as core 12 
on socket 0 00:04:49.025 EAL: Detected lcore 60 as core 13 on socket 0 00:04:49.025 EAL: Detected lcore 61 as core 16 on socket 0 00:04:49.025 EAL: Detected lcore 62 as core 17 on socket 0 00:04:49.025 EAL: Detected lcore 63 as core 18 on socket 0 00:04:49.025 EAL: Detected lcore 64 as core 19 on socket 0 00:04:49.025 EAL: Detected lcore 65 as core 20 on socket 0 00:04:49.025 EAL: Detected lcore 66 as core 21 on socket 0 00:04:49.025 EAL: Detected lcore 67 as core 25 on socket 0 00:04:49.025 EAL: Detected lcore 68 as core 26 on socket 0 00:04:49.025 EAL: Detected lcore 69 as core 27 on socket 0 00:04:49.025 EAL: Detected lcore 70 as core 28 on socket 0 00:04:49.025 EAL: Detected lcore 71 as core 29 on socket 0 00:04:49.025 EAL: Detected lcore 72 as core 0 on socket 1 00:04:49.025 EAL: Detected lcore 73 as core 1 on socket 1 00:04:49.025 EAL: Detected lcore 74 as core 2 on socket 1 00:04:49.025 EAL: Detected lcore 75 as core 3 on socket 1 00:04:49.025 EAL: Detected lcore 76 as core 4 on socket 1 00:04:49.025 EAL: Detected lcore 77 as core 5 on socket 1 00:04:49.025 EAL: Detected lcore 78 as core 6 on socket 1 00:04:49.025 EAL: Detected lcore 79 as core 8 on socket 1 00:04:49.025 EAL: Detected lcore 80 as core 9 on socket 1 00:04:49.025 EAL: Detected lcore 81 as core 10 on socket 1 00:04:49.025 EAL: Detected lcore 82 as core 11 on socket 1 00:04:49.025 EAL: Detected lcore 83 as core 12 on socket 1 00:04:49.025 EAL: Detected lcore 84 as core 13 on socket 1 00:04:49.025 EAL: Detected lcore 85 as core 16 on socket 1 00:04:49.025 EAL: Detected lcore 86 as core 17 on socket 1 00:04:49.025 EAL: Detected lcore 87 as core 18 on socket 1 00:04:49.025 EAL: Detected lcore 88 as core 19 on socket 1 00:04:49.025 EAL: Detected lcore 89 as core 20 on socket 1 00:04:49.025 EAL: Detected lcore 90 as core 21 on socket 1 00:04:49.025 EAL: Detected lcore 91 as core 25 on socket 1 00:04:49.025 EAL: Detected lcore 92 as core 26 on socket 1 00:04:49.025 EAL: Detected lcore 93 as core 27 on 
socket 1 00:04:49.025 EAL: Detected lcore 94 as core 28 on socket 1 00:04:49.025 EAL: Detected lcore 95 as core 29 on socket 1 00:04:49.025 EAL: Maximum logical cores by configuration: 128 00:04:49.025 EAL: Detected CPU lcores: 96 00:04:49.025 EAL: Detected NUMA nodes: 2 00:04:49.025 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:04:49.025 EAL: Detected shared linkage of DPDK 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:04:49.025 EAL: Registered [vdev] bus. 00:04:49.025 EAL: bus.vdev log level changed from disabled to notice 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:04:49.025 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:04:49.025 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:04:49.025 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:04:49.025 EAL: No shared files mode enabled, IPC will be disabled 00:04:49.025 EAL: No shared files mode enabled, IPC is disabled 00:04:49.025 EAL: Bus pci wants IOVA as 'DC' 00:04:49.025 EAL: Bus vdev wants IOVA as 'DC' 00:04:49.025 EAL: Buses did not request a specific IOVA mode. 
00:04:49.025 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:49.025 EAL: Selected IOVA mode 'VA' 00:04:49.025 EAL: Probing VFIO support... 00:04:49.026 EAL: IOMMU type 1 (Type 1) is supported 00:04:49.026 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:49.026 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:49.026 EAL: VFIO support initialized 00:04:49.026 EAL: Ask a virtual area of 0x2e000 bytes 00:04:49.026 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:49.026 EAL: Setting up physically contiguous memory... 00:04:49.026 EAL: Setting maximum number of open files to 524288 00:04:49.026 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:49.026 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:49.026 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:04:49.026 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:49.026 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:49.026 EAL: Ask a virtual area of 0x61000 bytes 00:04:49.026 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:49.026 EAL: Memseg list allocated at socket 1, page 
size 0x800kB 00:04:49.026 EAL: Ask a virtual area of 0x400000000 bytes 00:04:49.026 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:49.026 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:49.026 EAL: Hugepages will be freed exactly as allocated. 00:04:49.026 EAL: No shared files mode enabled, IPC is disabled 00:04:49.026 EAL: No shared files mode enabled, IPC is disabled 00:04:49.026 EAL: TSC frequency is ~2100000 KHz 00:04:49.026 EAL: Main lcore 0 is ready (tid=7f92ba85aa00;cpuset=[0]) 00:04:49.026 EAL: Trying to obtain current memory policy. 00:04:49.026 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.026 EAL: Restoring previous memory policy: 0 00:04:49.026 EAL: request: mp_malloc_sync 00:04:49.026 EAL: No shared files mode enabled, IPC is disabled 00:04:49.026 EAL: Heap on socket 0 was expanded by 2MB 00:04:49.026 EAL: PCI device 0000:3d:00.0 on NUMA socket 0 00:04:49.026 EAL: probe driver: 8086:37d2 net_i40e 00:04:49.026 EAL: Not managed by a supported kernel driver, skipped 00:04:49.026 EAL: PCI device 0000:3d:00.1 on NUMA socket 0 00:04:49.026 EAL: probe driver: 8086:37d2 net_i40e 00:04:49.026 EAL: Not managed by a supported kernel driver, skipped 00:04:49.026 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:49.285 EAL: Mem event callback 'spdk:(nil)' registered 00:04:49.285 00:04:49.285 00:04:49.285 CUnit - A unit testing framework for C - Version 2.1-3 00:04:49.285 http://cunit.sourceforge.net/ 00:04:49.285 00:04:49.285 00:04:49.285 Suite: components_suite 00:04:49.285 Test: vtophys_malloc_test ...passed 00:04:49.285 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
00:04:49.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.285 EAL: Restoring previous memory policy: 4 00:04:49.285 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.285 EAL: request: mp_malloc_sync 00:04:49.285 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: Heap on socket 0 was expanded by 4MB 00:04:49.285 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.285 EAL: request: mp_malloc_sync 00:04:49.285 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: Heap on socket 0 was shrunk by 4MB 00:04:49.285 EAL: Trying to obtain current memory policy. 00:04:49.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.285 EAL: Restoring previous memory policy: 4 00:04:49.285 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.285 EAL: request: mp_malloc_sync 00:04:49.285 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: Heap on socket 0 was expanded by 6MB 00:04:49.285 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.285 EAL: request: mp_malloc_sync 00:04:49.285 EAL: No shared files mode enabled, IPC is disabled 00:04:49.285 EAL: Heap on socket 0 was shrunk by 6MB 00:04:49.285 EAL: Trying to obtain current memory policy. 00:04:49.285 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.285 EAL: Restoring previous memory policy: 4 00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.286 EAL: request: mp_malloc_sync 00:04:49.286 EAL: No shared files mode enabled, IPC is disabled 00:04:49.286 EAL: Heap on socket 0 was expanded by 10MB 00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.286 EAL: request: mp_malloc_sync 00:04:49.286 EAL: No shared files mode enabled, IPC is disabled 00:04:49.286 EAL: Heap on socket 0 was shrunk by 10MB 00:04:49.286 EAL: Trying to obtain current memory policy. 
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.286 EAL: Restoring previous memory policy: 4
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was expanded by 18MB
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was shrunk by 18MB
00:04:49.286 EAL: Trying to obtain current memory policy.
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.286 EAL: Restoring previous memory policy: 4
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was expanded by 34MB
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was shrunk by 34MB
00:04:49.286 EAL: Trying to obtain current memory policy.
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.286 EAL: Restoring previous memory policy: 4
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was expanded by 66MB
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was shrunk by 66MB
00:04:49.286 EAL: Trying to obtain current memory policy.
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.286 EAL: Restoring previous memory policy: 4
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was expanded by 130MB
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was shrunk by 130MB
00:04:49.286 EAL: Trying to obtain current memory policy.
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.286 EAL: Restoring previous memory policy: 4
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was expanded by 258MB
00:04:49.286 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.286 EAL: request: mp_malloc_sync
00:04:49.286 EAL: No shared files mode enabled, IPC is disabled
00:04:49.286 EAL: Heap on socket 0 was shrunk by 258MB
00:04:49.286 EAL: Trying to obtain current memory policy.
00:04:49.286 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.545 EAL: Restoring previous memory policy: 4
00:04:49.545 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.545 EAL: request: mp_malloc_sync
00:04:49.545 EAL: No shared files mode enabled, IPC is disabled
00:04:49.545 EAL: Heap on socket 0 was expanded by 514MB
00:04:49.545 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.545 EAL: request: mp_malloc_sync
00:04:49.545 EAL: No shared files mode enabled, IPC is disabled
00:04:49.545 EAL: Heap on socket 0 was shrunk by 514MB
00:04:49.545 EAL: Trying to obtain current memory policy.
00:04:49.545 EAL: Setting policy MPOL_PREFERRED for socket 0
00:04:49.804 EAL: Restoring previous memory policy: 4
00:04:49.804 EAL: Calling mem event callback 'spdk:(nil)'
00:04:49.804 EAL: request: mp_malloc_sync
00:04:49.804 EAL: No shared files mode enabled, IPC is disabled
00:04:49.804 EAL: Heap on socket 0 was expanded by 1026MB
00:04:50.063 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.063 EAL: request: mp_malloc_sync
00:04:50.063 EAL: No shared files mode enabled, IPC is disabled
00:04:50.063 EAL: Heap on socket 0 was shrunk by 1026MB
00:04:50.063 passed
00:04:50.063
00:04:50.063 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.063 suites 1 1 n/a 0 0
00:04:50.063 tests 2 2 2 0 0
00:04:50.063 asserts 497 497 497 0 n/a
00:04:50.063
00:04:50.063 Elapsed time = 0.968 seconds
00:04:50.063 EAL: Calling mem event callback 'spdk:(nil)'
00:04:50.063 EAL: request: mp_malloc_sync
00:04:50.063 EAL: No shared files mode enabled, IPC is disabled
00:04:50.063 EAL: Heap on socket 0 was shrunk by 2MB
00:04:50.063 EAL: No shared files mode enabled, IPC is disabled
00:04:50.063 EAL: No shared files mode enabled, IPC is disabled
00:04:50.063 EAL: No shared files mode enabled, IPC is disabled
00:04:50.063
00:04:50.063 real 0m1.108s
00:04:50.063 user 0m0.639s
00:04:50.063 sys 0m0.437s
00:04:50.063 06:10:41 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.063 06:10:41 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:04:50.063 ************************************
00:04:50.063 END TEST env_vtophys
00:04:50.063 ************************************
00:04:50.322 06:10:41 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.322 06:10:41 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:50.322 06:10:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.322 06:10:41 env -- common/autotest_common.sh@10 -- # set +x
00:04:50.322
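Note: the env_vtophys trace above expands and shrinks the socket-0 heap in a doubling pattern (18MB, 34MB, 66MB, ..., 1026MB). Those sizes are consistent with the test allocating power-of-two buffers plus roughly 2MB of allocator overhead; the buffer sizes are an inference from the log, not taken from the test source. A minimal sketch of the pattern:

```python
# Hypothetical reconstruction of the "Heap on socket 0 was expanded by"
# sizes in the env_vtophys log: a 2**k MB allocation plus ~2 MB of
# overhead. The "+ 2" term is an assumption inferred from the log.
expansions_mb = [2**k + 2 for k in range(4, 11)]
print(expansions_mb)  # [18, 34, 66, 130, 258, 514, 1026]
```

The printed list matches the expansion sizes reported by EAL in the trace, one per malloc step.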
************************************
00:04:50.322 START TEST env_pci
00:04:50.322 ************************************
00:04:50.322 06:10:41 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:04:50.322
00:04:50.322
00:04:50.322 CUnit - A unit testing framework for C - Version 2.1-3
00:04:50.322 http://cunit.sourceforge.net/
00:04:50.322
00:04:50.322
00:04:50.322 Suite: pci
00:04:50.322 Test: pci_hook ...[2024-12-13 06:10:41.785247] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 771431 has claimed it
00:04:50.322 EAL: Cannot find device (10000:00:01.0)
00:04:50.322 EAL: Failed to attach device on primary process
00:04:50.322 passed
00:04:50.322
00:04:50.322 Run Summary: Type Total Ran Passed Failed Inactive
00:04:50.322 suites 1 1 n/a 0 0
00:04:50.322 tests 1 1 1 0 0
00:04:50.322 asserts 25 25 25 0 n/a
00:04:50.322
00:04:50.322 Elapsed time = 0.026 seconds
00:04:50.322
00:04:50.322 real 0m0.045s
00:04:50.322 user 0m0.009s
00:04:50.322 sys 0m0.036s
00:04:50.323 06:10:41 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:50.323 06:10:41 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:04:50.323 ************************************
00:04:50.323 END TEST env_pci
00:04:50.323 ************************************
00:04:50.323 06:10:41 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:04:50.323 06:10:41 env -- env/env.sh@15 -- # uname
00:04:50.323 06:10:41 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:04:50.323 06:10:41 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:04:50.323 06:10:41 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:04:50.323 06:10:41 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:04:50.323 06:10:41 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:50.323 06:10:41 env -- common/autotest_common.sh@10 -- # set +x
00:04:50.323 ************************************
00:04:50.323 START TEST env_dpdk_post_init
00:04:50.323 ************************************
00:04:50.323 06:10:41 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
EAL: Detected CPU lcores: 96
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
TELEMETRY: No legacy callbacks, legacy socket not created
EAL: Using IOMMU type 1 (Type 1)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
EAL: Ignore mapping IO port bar(1)
EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:04:51.519 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:04:51.519 EAL: Ignore mapping IO port bar(1)
00:04:51.519 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:04:54.887 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:04:54.887 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:04:54.887 Starting DPDK initialization...
00:04:54.887 Starting SPDK post initialization...
00:04:54.887 SPDK NVMe probe
00:04:54.887 Attaching to 0000:5e:00.0
00:04:54.887 Attached to 0000:5e:00.0
00:04:54.887 Cleaning up...
00:04:54.887
00:04:54.887 real 0m4.373s
00:04:54.887 user 0m3.267s
00:04:54.887 sys 0m0.174s
00:04:54.887 06:10:46 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.887 06:10:46 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:04:54.887 ************************************
00:04:54.887 END TEST env_dpdk_post_init
00:04:54.887 ************************************
00:04:54.887 06:10:46 env -- env/env.sh@26 -- # uname
00:04:54.887 06:10:46 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:04:54.887 06:10:46 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.887 06:10:46 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.887 06:10:46 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.887 06:10:46 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.887 ************************************
00:04:54.887 START TEST env_mem_callbacks
00:04:54.887 ************************************
00:04:54.887 06:10:46 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:04:54.887 EAL: Detected CPU lcores: 96
00:04:54.887 EAL: Detected NUMA nodes: 2
00:04:54.887 EAL: Detected shared linkage of DPDK
00:04:54.887 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:04:54.887 EAL: Selected IOVA mode 'VA'
00:04:54.887 EAL: VFIO support initialized
00:04:54.887 TELEMETRY: No legacy callbacks, legacy socket not created
00:04:54.887
00:04:54.887
00:04:54.887 CUnit - A unit testing framework for C - Version 2.1-3
00:04:54.887 http://cunit.sourceforge.net/
00:04:54.887
00:04:54.887
00:04:54.887 Suite: memory
00:04:54.887 Test: test ...
00:04:54.887 register 0x200000200000 2097152
00:04:54.887 malloc 3145728
00:04:54.887 register 0x200000400000 4194304
00:04:54.887 buf 0x200000500000 len 3145728 PASSED
00:04:54.887 malloc 64
00:04:54.887 buf 0x2000004fff40 len 64 PASSED
00:04:54.887 malloc 4194304
00:04:54.887 register 0x200000800000 6291456
00:04:54.887 buf 0x200000a00000 len 4194304 PASSED
00:04:54.887 free 0x200000500000 3145728
00:04:54.887 free 0x2000004fff40 64
00:04:54.887 unregister 0x200000400000 4194304 PASSED
00:04:54.887 free 0x200000a00000 4194304
00:04:54.887 unregister 0x200000800000 6291456 PASSED
00:04:54.887 malloc 8388608
00:04:54.887 register 0x200000400000 10485760
00:04:54.887 buf 0x200000600000 len 8388608 PASSED
00:04:54.887 free 0x200000600000 8388608
00:04:54.887 unregister 0x200000400000 10485760 PASSED
00:04:54.887 passed
00:04:54.887
00:04:54.887 Run Summary: Type Total Ran Passed Failed Inactive
00:04:54.887 suites 1 1 n/a 0 0
00:04:54.887 tests 1 1 1 0 0
00:04:54.887 asserts 15 15 15 0 n/a
00:04:54.887
00:04:54.887 Elapsed time = 0.009 seconds
00:04:54.887
00:04:54.887 real 0m0.061s
00:04:54.887 user 0m0.019s
00:04:54.887 sys 0m0.042s
00:04:54.888 06:10:46 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.888 06:10:46 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:04:54.888 ************************************
00:04:54.888 END TEST env_mem_callbacks
00:04:54.888 ************************************
00:04:54.888
00:04:54.888 real 0m6.277s
00:04:54.888 user 0m4.327s
00:04:54.888 sys 0m1.020s
00:04:54.888 06:10:46 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:54.888 06:10:46 env -- common/autotest_common.sh@10 -- # set +x
00:04:54.888 ************************************
00:04:54.888 END TEST env
00:04:54.888 ************************************
00:04:54.888 06:10:46 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:54.888 06:10:46
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:54.888 06:10:46 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:54.888 06:10:46 -- common/autotest_common.sh@10 -- # set +x
00:04:54.888 ************************************
00:04:54.888 START TEST rpc
00:04:54.888 ************************************
00:04:54.888 06:10:46 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:04:55.177 * Looking for test storage...
00:04:55.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:04:55.177 06:10:46 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:04:55.177 06:10:46 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:04:55.177 06:10:46 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:04:55.177 06:10:46 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:04:55.177 06:10:46 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:04:55.177 06:10:46 rpc -- scripts/common.sh@344 -- # case "$op" in
00:04:55.177 06:10:46 rpc -- scripts/common.sh@345 -- # : 1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:04:55.177 06:10:46 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:04:55.177 06:10:46 rpc -- scripts/common.sh@365 -- # decimal 1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@353 -- # local d=1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:04:55.177 06:10:46 rpc -- scripts/common.sh@355 -- # echo 1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:04:55.177 06:10:46 rpc -- scripts/common.sh@366 -- # decimal 2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@353 -- # local d=2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:04:55.177 06:10:46 rpc -- scripts/common.sh@355 -- # echo 2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:04:55.177 06:10:46 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:04:55.177 06:10:46 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:04:55.177 06:10:46 rpc -- scripts/common.sh@368 -- # return 0
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:04:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.177 --rc genhtml_branch_coverage=1
00:04:55.177 --rc genhtml_function_coverage=1
00:04:55.177 --rc genhtml_legend=1
00:04:55.177 --rc geninfo_all_blocks=1
00:04:55.177 --rc geninfo_unexecuted_blocks=1
00:04:55.177
00:04:55.177 '
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:04:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.177 --rc genhtml_branch_coverage=1
00:04:55.177 --rc genhtml_function_coverage=1
00:04:55.177 --rc genhtml_legend=1
00:04:55.177 --rc geninfo_all_blocks=1
00:04:55.177 --rc geninfo_unexecuted_blocks=1
00:04:55.177
00:04:55.177 '
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:04:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.177 --rc genhtml_branch_coverage=1
00:04:55.177 --rc genhtml_function_coverage=1
00:04:55.177 --rc genhtml_legend=1
00:04:55.177 --rc geninfo_all_blocks=1
00:04:55.177 --rc geninfo_unexecuted_blocks=1
00:04:55.177
00:04:55.177 '
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:04:55.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:04:55.177 --rc genhtml_branch_coverage=1
00:04:55.177 --rc genhtml_function_coverage=1
00:04:55.177 --rc genhtml_legend=1
00:04:55.177 --rc geninfo_all_blocks=1
00:04:55.177 --rc geninfo_unexecuted_blocks=1
00:04:55.177
00:04:55.177 '
00:04:55.177 06:10:46 rpc -- rpc/rpc.sh@65 -- # spdk_pid=772286
00:04:55.177 06:10:46 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:04:55.177 06:10:46 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:04:55.177 06:10:46 rpc -- rpc/rpc.sh@67 -- # waitforlisten 772286
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@835 -- # '[' -z 772286 ']'
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:04:55.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:55.177 06:10:46 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.177 [2024-12-13 06:10:46.742227] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:04:55.177 [2024-12-13 06:10:46.742273] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772286 ]
00:04:55.177 [2024-12-13 06:10:46.819601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:04:55.486 [2024-12-13 06:10:46.842702] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:04:55.486 [2024-12-13 06:10:46.842738] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 772286' to capture a snapshot of events at runtime.
00:04:55.486 [2024-12-13 06:10:46.842746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:04:55.486 [2024-12-13 06:10:46.842752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:04:55.486 [2024-12-13 06:10:46.842757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid772286 for offline analysis/debug.
00:04:55.486 [2024-12-13 06:10:46.843268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:04:55.486 06:10:47 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:55.486 06:10:47 rpc -- common/autotest_common.sh@868 -- # return 0
00:04:55.486 06:10:47 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.486 06:10:47 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:04:55.486 06:10:47 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:04:55.486 06:10:47 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:04:55.486 06:10:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.486 06:10:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.486 06:10:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:55.486 ************************************
00:04:55.486 START TEST rpc_integrity
00:04:55.486 ************************************
00:04:55.486 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:04:55.486 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:04:55.486 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.486 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.486 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.486 06:10:47 rpc.rpc_integrity --
rpc/rpc.sh@12 -- # bdevs='[]'
00:04:55.486 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:04:55.756 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:04:55.756 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.756 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:04:55.756 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.756 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.756 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:04:55.756 {
00:04:55.756 "name": "Malloc0",
00:04:55.756 "aliases": [
00:04:55.756 "5d87b4cb-38f3-4a3a-bc41-fb224113dc41"
00:04:55.756 ],
00:04:55.756 "product_name": "Malloc disk",
00:04:55.756 "block_size": 512,
00:04:55.757 "num_blocks": 16384,
00:04:55.757 "uuid": "5d87b4cb-38f3-4a3a-bc41-fb224113dc41",
00:04:55.757 "assigned_rate_limits": {
00:04:55.757 "rw_ios_per_sec": 0,
00:04:55.757 "rw_mbytes_per_sec": 0,
00:04:55.757 "r_mbytes_per_sec": 0,
00:04:55.757 "w_mbytes_per_sec": 0
00:04:55.757 },
00:04:55.757 "claimed": false,
00:04:55.757 "zoned": false,
00:04:55.757 "supported_io_types": {
00:04:55.757 "read": true,
00:04:55.757 "write": true,
00:04:55.757 "unmap": true,
00:04:55.757 "flush": true,
00:04:55.757 "reset": true,
00:04:55.757 "nvme_admin": false,
00:04:55.757 "nvme_io": false,
00:04:55.757 "nvme_io_md": false,
00:04:55.757 "write_zeroes": true,
00:04:55.757 "zcopy": true,
00:04:55.757 "get_zone_info": false,
00:04:55.757 "zone_management": false,
00:04:55.757 "zone_append": false,
00:04:55.757 "compare": false,
00:04:55.757 "compare_and_write": false,
00:04:55.757 "abort": true,
00:04:55.757 "seek_hole": false,
00:04:55.757 "seek_data": false,
00:04:55.757 "copy": true,
00:04:55.757 "nvme_iov_md": false
00:04:55.757 },
00:04:55.757 "memory_domains": [
00:04:55.757 {
00:04:55.757 "dma_device_id": "system",
00:04:55.757 "dma_device_type": 1
00:04:55.757 },
00:04:55.757 {
00:04:55.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.757 "dma_device_type": 2
00:04:55.757 }
00:04:55.757 ],
00:04:55.757 "driver_specific": {}
00:04:55.757 }
00:04:55.757 ]'
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 [2024-12-13 06:10:47.224434] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:04:55.757 [2024-12-13 06:10:47.224469] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:04:55.757 [2024-12-13 06:10:47.224481] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x103bae0
00:04:55.757 [2024-12-13 06:10:47.224487] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:04:55.757 [2024-12-13 06:10:47.225539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:04:55.757 [2024-12-13 06:10:47.225560] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:04:55.757 Passthru0
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:04:55.757 {
00:04:55.757 "name": "Malloc0",
00:04:55.757 "aliases": [
00:04:55.757 "5d87b4cb-38f3-4a3a-bc41-fb224113dc41"
00:04:55.757 ],
00:04:55.757 "product_name": "Malloc disk",
00:04:55.757 "block_size": 512,
00:04:55.757 "num_blocks": 16384,
00:04:55.757 "uuid": "5d87b4cb-38f3-4a3a-bc41-fb224113dc41",
00:04:55.757 "assigned_rate_limits": {
00:04:55.757 "rw_ios_per_sec": 0,
00:04:55.757 "rw_mbytes_per_sec": 0,
00:04:55.757 "r_mbytes_per_sec": 0,
00:04:55.757 "w_mbytes_per_sec": 0
00:04:55.757 },
00:04:55.757 "claimed": true,
00:04:55.757 "claim_type": "exclusive_write",
00:04:55.757 "zoned": false,
00:04:55.757 "supported_io_types": {
00:04:55.757 "read": true,
00:04:55.757 "write": true,
00:04:55.757 "unmap": true,
00:04:55.757 "flush": true,
00:04:55.757 "reset": true,
00:04:55.757 "nvme_admin": false,
00:04:55.757 "nvme_io": false,
00:04:55.757 "nvme_io_md": false,
00:04:55.757 "write_zeroes": true,
00:04:55.757 "zcopy": true,
00:04:55.757 "get_zone_info": false,
00:04:55.757 "zone_management": false,
00:04:55.757 "zone_append": false,
00:04:55.757 "compare": false,
00:04:55.757 "compare_and_write": false,
00:04:55.757 "abort": true,
00:04:55.757 "seek_hole": false,
00:04:55.757 "seek_data": false,
00:04:55.757 "copy": true,
00:04:55.757 "nvme_iov_md": false
00:04:55.757 },
00:04:55.757 "memory_domains": [
00:04:55.757 {
00:04:55.757 "dma_device_id": "system",
00:04:55.757 "dma_device_type": 1
00:04:55.757 },
00:04:55.757 {
00:04:55.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.757 "dma_device_type": 2
00:04:55.757 }
00:04:55.757 ],
00:04:55.757 "driver_specific": {}
00:04:55.757 },
00:04:55.757 {
00:04:55.757 "name": "Passthru0",
00:04:55.757 "aliases": [
00:04:55.757 "ea896bd4-dc88-5de4-8c63-235066f83b39"
00:04:55.757 ],
00:04:55.757 "product_name": "passthru",
00:04:55.757 "block_size": 512,
00:04:55.757 "num_blocks": 16384,
00:04:55.757 "uuid": "ea896bd4-dc88-5de4-8c63-235066f83b39",
00:04:55.757 "assigned_rate_limits": {
00:04:55.757 "rw_ios_per_sec": 0,
00:04:55.757 "rw_mbytes_per_sec": 0,
00:04:55.757 "r_mbytes_per_sec": 0,
00:04:55.757 "w_mbytes_per_sec": 0
00:04:55.757 },
00:04:55.757 "claimed": false,
00:04:55.757 "zoned": false,
00:04:55.757 "supported_io_types": {
00:04:55.757 "read": true,
00:04:55.757 "write": true,
00:04:55.757 "unmap": true,
00:04:55.757 "flush": true,
00:04:55.757 "reset": true,
00:04:55.757 "nvme_admin": false,
00:04:55.757 "nvme_io": false,
00:04:55.757 "nvme_io_md": false,
00:04:55.757 "write_zeroes": true,
00:04:55.757 "zcopy": true,
00:04:55.757 "get_zone_info": false,
00:04:55.757 "zone_management": false,
00:04:55.757 "zone_append": false,
00:04:55.757 "compare": false,
00:04:55.757 "compare_and_write": false,
00:04:55.757 "abort": true,
00:04:55.757 "seek_hole": false,
00:04:55.757 "seek_data": false,
00:04:55.757 "copy": true,
00:04:55.757 "nvme_iov_md": false
00:04:55.757 },
00:04:55.757 "memory_domains": [
00:04:55.757 {
00:04:55.757 "dma_device_id": "system",
00:04:55.757 "dma_device_type": 1
00:04:55.757 },
00:04:55.757 {
00:04:55.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:55.757 "dma_device_type": 2
00:04:55.757 }
00:04:55.757 ],
00:04:55.757 "driver_specific": {
00:04:55.757 "passthru": {
00:04:55.757 "name": "Passthru0",
00:04:55.757 "base_bdev_name": "Malloc0"
00:04:55.757 }
00:04:55.757 }
00:04:55.757 }
00:04:55.757 ]'
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']'
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]'
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length
00:04:55.757 06:10:47 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']'
00:04:55.757
00:04:55.757 real 0m0.282s
00:04:55.757 user 0m0.173s
00:04:55.757 sys 0m0.042s
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:55.757 06:10:47 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:04:55.757 ************************************
00:04:55.757 END TEST rpc_integrity
00:04:55.757 ************************************
00:04:55.757 06:10:47 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins
00:04:55.757 06:10:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:04:55.757 06:10:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:04:55.757 06:10:47 rpc -- common/autotest_common.sh@10 -- # set +x
00:04:56.016 ************************************
00:04:56.016 START TEST rpc_plugins
00:04:56.016 ************************************
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins
00:04:56.016 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.016 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1
00:04:56.016 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:56.016 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.016 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[
00:04:56.016 {
00:04:56.016 "name": "Malloc1",
00:04:56.016 "aliases": [
00:04:56.016 "8540a63f-fea3-489a-a85e-215bddbda7ef"
00:04:56.016 ],
00:04:56.016 "product_name": "Malloc disk",
00:04:56.016 "block_size": 4096,
00:04:56.016 "num_blocks": 256,
00:04:56.016 "uuid": "8540a63f-fea3-489a-a85e-215bddbda7ef",
00:04:56.016 "assigned_rate_limits": {
00:04:56.016 "rw_ios_per_sec": 0,
00:04:56.016 "rw_mbytes_per_sec": 0,
00:04:56.016 "r_mbytes_per_sec": 0,
00:04:56.016 "w_mbytes_per_sec": 0
00:04:56.016 },
00:04:56.016 "claimed": false,
00:04:56.016 "zoned": false,
00:04:56.016 "supported_io_types": {
00:04:56.016 "read": true,
00:04:56.016 "write": true,
00:04:56.016 "unmap": true,
00:04:56.016 "flush": true,
00:04:56.016 "reset": true,
00:04:56.016 "nvme_admin": false,
00:04:56.016 "nvme_io": false,
00:04:56.016 "nvme_io_md": false,
00:04:56.016 "write_zeroes": true,
00:04:56.016 "zcopy": true,
00:04:56.016 "get_zone_info": false,
00:04:56.016 "zone_management": false,
00:04:56.016 "zone_append": false,
00:04:56.016 "compare": false,
00:04:56.016 "compare_and_write": false,
00:04:56.016 "abort": true,
00:04:56.016 "seek_hole": false,
00:04:56.016 "seek_data": false,
00:04:56.016 "copy": true,
00:04:56.016 "nvme_iov_md": false
00:04:56.016 },
00:04:56.016 "memory_domains": [
00:04:56.016 {
00:04:56.016 "dma_device_id": "system",
00:04:56.016 "dma_device_type": 1
00:04:56.016 },
00:04:56.016 {
00:04:56.017 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:04:56.017 "dma_device_type": 2
00:04:56.017 }
00:04:56.017 ],
00:04:56.017 "driver_specific": {}
00:04:56.017 }
00:04:56.017 ]'
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']'
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]'
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length
00:04:56.017 06:10:47 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']'
00:04:56.017
00:04:56.017 real 0m0.142s
00:04:56.017 user 0m0.085s
00:04:56.017 sys 0m0.022s
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable
00:04:56.017 06:10:47 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x
00:04:56.017 ************************************
00:04:56.017 END TEST rpc_plugins 00:04:56.017 ************************************ 00:04:56.017 06:10:47 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:56.017 06:10:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.017 06:10:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.017 06:10:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 ************************************ 00:04:56.017 START TEST rpc_trace_cmd_test 00:04:56.017 ************************************ 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.017 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.275 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:56.275 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid772286", 00:04:56.275 "tpoint_group_mask": "0x8", 00:04:56.275 "iscsi_conn": { 00:04:56.275 "mask": "0x2", 00:04:56.275 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "scsi": { 00:04:56.276 "mask": "0x4", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "bdev": { 00:04:56.276 "mask": "0x8", 00:04:56.276 "tpoint_mask": "0xffffffffffffffff" 00:04:56.276 }, 00:04:56.276 "nvmf_rdma": { 00:04:56.276 "mask": "0x10", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "nvmf_tcp": { 00:04:56.276 "mask": "0x20", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "ftl": { 00:04:56.276 "mask": "0x40", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "blobfs": { 00:04:56.276 "mask": "0x80", 00:04:56.276 
"tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "dsa": { 00:04:56.276 "mask": "0x200", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "thread": { 00:04:56.276 "mask": "0x400", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "nvme_pcie": { 00:04:56.276 "mask": "0x800", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "iaa": { 00:04:56.276 "mask": "0x1000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "nvme_tcp": { 00:04:56.276 "mask": "0x2000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "bdev_nvme": { 00:04:56.276 "mask": "0x4000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "sock": { 00:04:56.276 "mask": "0x8000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "blob": { 00:04:56.276 "mask": "0x10000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "bdev_raid": { 00:04:56.276 "mask": "0x20000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 }, 00:04:56.276 "scheduler": { 00:04:56.276 "mask": "0x40000", 00:04:56.276 "tpoint_mask": "0x0" 00:04:56.276 } 00:04:56.276 }' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:56.276 00:04:56.276 real 0m0.224s 00:04:56.276 user 0m0.189s 00:04:56.276 sys 0m0.028s 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.276 06:10:47 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:56.276 ************************************ 00:04:56.276 END TEST rpc_trace_cmd_test 00:04:56.276 ************************************ 00:04:56.276 06:10:47 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:56.276 06:10:47 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:56.276 06:10:47 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:56.276 06:10:47 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.276 06:10:47 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.276 06:10:47 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 ************************************ 00:04:56.535 START TEST rpc_daemon_integrity 00:04:56.535 ************************************ 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 06:10:47 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:56.535 { 00:04:56.535 "name": "Malloc2", 00:04:56.535 "aliases": [ 00:04:56.535 "23e5ac40-9683-4e7e-8ce9-640776b4d961" 00:04:56.535 ], 00:04:56.535 "product_name": "Malloc disk", 00:04:56.535 "block_size": 512, 00:04:56.535 "num_blocks": 16384, 00:04:56.535 "uuid": "23e5ac40-9683-4e7e-8ce9-640776b4d961", 00:04:56.535 "assigned_rate_limits": { 00:04:56.535 "rw_ios_per_sec": 0, 00:04:56.535 "rw_mbytes_per_sec": 0, 00:04:56.535 "r_mbytes_per_sec": 0, 00:04:56.535 "w_mbytes_per_sec": 0 00:04:56.535 }, 00:04:56.535 "claimed": false, 00:04:56.535 "zoned": false, 00:04:56.535 "supported_io_types": { 00:04:56.535 "read": true, 00:04:56.535 "write": true, 00:04:56.535 "unmap": true, 00:04:56.535 "flush": true, 00:04:56.535 "reset": true, 00:04:56.535 "nvme_admin": false, 00:04:56.535 "nvme_io": false, 00:04:56.535 "nvme_io_md": false, 00:04:56.535 "write_zeroes": true, 00:04:56.535 "zcopy": true, 00:04:56.535 "get_zone_info": false, 00:04:56.535 "zone_management": false, 00:04:56.535 "zone_append": false, 00:04:56.535 "compare": false, 00:04:56.535 "compare_and_write": false, 00:04:56.535 "abort": true, 00:04:56.535 "seek_hole": false, 00:04:56.535 "seek_data": false, 00:04:56.535 "copy": true, 00:04:56.535 "nvme_iov_md": false 00:04:56.535 }, 00:04:56.535 "memory_domains": [ 00:04:56.535 { 
00:04:56.535 "dma_device_id": "system", 00:04:56.535 "dma_device_type": 1 00:04:56.535 }, 00:04:56.535 { 00:04:56.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.535 "dma_device_type": 2 00:04:56.535 } 00:04:56.535 ], 00:04:56.535 "driver_specific": {} 00:04:56.535 } 00:04:56.535 ]' 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 [2024-12-13 06:10:48.070726] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:56.535 [2024-12-13 06:10:48.070752] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:56.535 [2024-12-13 06:10:48.070767] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xef9f80 00:04:56.535 [2024-12-13 06:10:48.070773] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:56.535 [2024-12-13 06:10:48.071729] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:56.535 [2024-12-13 06:10:48.071751] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:56.535 Passthru0 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:56.535 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:56.535 { 00:04:56.535 "name": "Malloc2", 00:04:56.535 "aliases": [ 00:04:56.535 "23e5ac40-9683-4e7e-8ce9-640776b4d961" 00:04:56.535 ], 00:04:56.535 "product_name": "Malloc disk", 00:04:56.535 "block_size": 512, 00:04:56.535 "num_blocks": 16384, 00:04:56.535 "uuid": "23e5ac40-9683-4e7e-8ce9-640776b4d961", 00:04:56.535 "assigned_rate_limits": { 00:04:56.535 "rw_ios_per_sec": 0, 00:04:56.535 "rw_mbytes_per_sec": 0, 00:04:56.535 "r_mbytes_per_sec": 0, 00:04:56.535 "w_mbytes_per_sec": 0 00:04:56.535 }, 00:04:56.535 "claimed": true, 00:04:56.535 "claim_type": "exclusive_write", 00:04:56.535 "zoned": false, 00:04:56.535 "supported_io_types": { 00:04:56.535 "read": true, 00:04:56.535 "write": true, 00:04:56.535 "unmap": true, 00:04:56.535 "flush": true, 00:04:56.535 "reset": true, 00:04:56.535 "nvme_admin": false, 00:04:56.535 "nvme_io": false, 00:04:56.535 "nvme_io_md": false, 00:04:56.535 "write_zeroes": true, 00:04:56.535 "zcopy": true, 00:04:56.535 "get_zone_info": false, 00:04:56.535 "zone_management": false, 00:04:56.535 "zone_append": false, 00:04:56.535 "compare": false, 00:04:56.535 "compare_and_write": false, 00:04:56.535 "abort": true, 00:04:56.535 "seek_hole": false, 00:04:56.535 "seek_data": false, 00:04:56.535 "copy": true, 00:04:56.535 "nvme_iov_md": false 00:04:56.535 }, 00:04:56.535 "memory_domains": [ 00:04:56.535 { 00:04:56.535 "dma_device_id": "system", 00:04:56.535 "dma_device_type": 1 00:04:56.535 }, 00:04:56.535 { 00:04:56.535 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.535 "dma_device_type": 2 00:04:56.535 } 00:04:56.535 ], 00:04:56.535 "driver_specific": {} 00:04:56.536 }, 00:04:56.536 { 00:04:56.536 "name": "Passthru0", 00:04:56.536 "aliases": [ 00:04:56.536 "8503cd19-3331-5c7c-a826-7a71188069d6" 00:04:56.536 ], 00:04:56.536 "product_name": "passthru", 00:04:56.536 "block_size": 512, 00:04:56.536 "num_blocks": 16384, 00:04:56.536 "uuid": 
"8503cd19-3331-5c7c-a826-7a71188069d6", 00:04:56.536 "assigned_rate_limits": { 00:04:56.536 "rw_ios_per_sec": 0, 00:04:56.536 "rw_mbytes_per_sec": 0, 00:04:56.536 "r_mbytes_per_sec": 0, 00:04:56.536 "w_mbytes_per_sec": 0 00:04:56.536 }, 00:04:56.536 "claimed": false, 00:04:56.536 "zoned": false, 00:04:56.536 "supported_io_types": { 00:04:56.536 "read": true, 00:04:56.536 "write": true, 00:04:56.536 "unmap": true, 00:04:56.536 "flush": true, 00:04:56.536 "reset": true, 00:04:56.536 "nvme_admin": false, 00:04:56.536 "nvme_io": false, 00:04:56.536 "nvme_io_md": false, 00:04:56.536 "write_zeroes": true, 00:04:56.536 "zcopy": true, 00:04:56.536 "get_zone_info": false, 00:04:56.536 "zone_management": false, 00:04:56.536 "zone_append": false, 00:04:56.536 "compare": false, 00:04:56.536 "compare_and_write": false, 00:04:56.536 "abort": true, 00:04:56.536 "seek_hole": false, 00:04:56.536 "seek_data": false, 00:04:56.536 "copy": true, 00:04:56.536 "nvme_iov_md": false 00:04:56.536 }, 00:04:56.536 "memory_domains": [ 00:04:56.536 { 00:04:56.536 "dma_device_id": "system", 00:04:56.536 "dma_device_type": 1 00:04:56.536 }, 00:04:56.536 { 00:04:56.536 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:56.536 "dma_device_type": 2 00:04:56.536 } 00:04:56.536 ], 00:04:56.536 "driver_specific": { 00:04:56.536 "passthru": { 00:04:56.536 "name": "Passthru0", 00:04:56.536 "base_bdev_name": "Malloc2" 00:04:56.536 } 00:04:56.536 } 00:04:56.536 } 00:04:56.536 ]' 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:56.536 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:56.795 06:10:48 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:56.795 00:04:56.795 real 0m0.273s 00:04:56.795 user 0m0.178s 00:04:56.795 sys 0m0.035s 00:04:56.795 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.795 06:10:48 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:56.795 ************************************ 00:04:56.795 END TEST rpc_daemon_integrity 00:04:56.795 ************************************ 00:04:56.795 06:10:48 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:56.795 06:10:48 rpc -- rpc/rpc.sh@84 -- # killprocess 772286 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@954 -- # '[' -z 772286 ']' 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@958 -- # kill -0 772286 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@959 -- # uname 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:56.795 06:10:48 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772286 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772286' 00:04:56.795 killing process with pid 772286 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@973 -- # kill 772286 00:04:56.795 06:10:48 rpc -- common/autotest_common.sh@978 -- # wait 772286 00:04:57.055 00:04:57.055 real 0m2.077s 00:04:57.055 user 0m2.653s 00:04:57.055 sys 0m0.713s 00:04:57.055 06:10:48 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.055 06:10:48 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.055 ************************************ 00:04:57.055 END TEST rpc 00:04:57.055 ************************************ 00:04:57.055 06:10:48 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.055 06:10:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.055 06:10:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.055 06:10:48 -- common/autotest_common.sh@10 -- # set +x 00:04:57.055 ************************************ 00:04:57.055 START TEST skip_rpc 00:04:57.055 ************************************ 00:04:57.055 06:10:48 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:57.315 * Looking for test storage... 
00:04:57.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.315 06:10:48 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.315 --rc genhtml_branch_coverage=1 00:04:57.315 --rc genhtml_function_coverage=1 00:04:57.315 --rc genhtml_legend=1 00:04:57.315 --rc geninfo_all_blocks=1 00:04:57.315 --rc geninfo_unexecuted_blocks=1 00:04:57.315 00:04:57.315 ' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.315 --rc genhtml_branch_coverage=1 00:04:57.315 --rc genhtml_function_coverage=1 00:04:57.315 --rc genhtml_legend=1 00:04:57.315 --rc geninfo_all_blocks=1 00:04:57.315 --rc geninfo_unexecuted_blocks=1 00:04:57.315 00:04:57.315 ' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:04:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.315 --rc genhtml_branch_coverage=1 00:04:57.315 --rc genhtml_function_coverage=1 00:04:57.315 --rc genhtml_legend=1 00:04:57.315 --rc geninfo_all_blocks=1 00:04:57.315 --rc geninfo_unexecuted_blocks=1 00:04:57.315 00:04:57.315 ' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:57.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.315 --rc genhtml_branch_coverage=1 00:04:57.315 --rc genhtml_function_coverage=1 00:04:57.315 --rc genhtml_legend=1 00:04:57.315 --rc geninfo_all_blocks=1 00:04:57.315 --rc geninfo_unexecuted_blocks=1 00:04:57.315 00:04:57.315 ' 00:04:57.315 06:10:48 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:57.315 06:10:48 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:57.315 06:10:48 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.315 06:10:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.315 ************************************ 00:04:57.315 START TEST skip_rpc 00:04:57.315 ************************************ 00:04:57.315 06:10:48 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:57.315 06:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=772913 00:04:57.315 06:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.315 06:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:57.315 06:10:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:57.315 [2024-12-13 06:10:48.925798] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:04:57.315 [2024-12-13 06:10:48.925834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772913 ] 00:04:57.574 [2024-12-13 06:10:48.998466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.574 [2024-12-13 06:10:49.020400] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:02.841 06:10:53 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 772913 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 772913 ']' 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 772913 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 772913 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 772913' 00:05:02.841 killing process with pid 772913 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 772913 00:05:02.841 06:10:53 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 772913 00:05:02.841 00:05:02.841 real 0m5.352s 00:05:02.841 user 0m5.102s 00:05:02.841 sys 0m0.286s 00:05:02.841 06:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.841 06:10:54 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.841 ************************************ 00:05:02.841 END TEST skip_rpc 00:05:02.841 ************************************ 00:05:02.841 06:10:54 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:02.841 06:10:54 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.841 06:10:54 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.841 06:10:54 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:02.841 ************************************ 00:05:02.841 START TEST skip_rpc_with_json 00:05:02.841 ************************************ 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=773835 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:02.841 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 773835 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 773835 ']' 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:02.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.842 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:02.842 [2024-12-13 06:10:54.347480] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:02.842 [2024-12-13 06:10:54.347524] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773835 ] 00:05:02.842 [2024-12-13 06:10:54.420806] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.842 [2024-12-13 06:10:54.443317] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.100 [2024-12-13 06:10:54.647390] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:03.100 request: 00:05:03.100 { 00:05:03.100 "trtype": "tcp", 00:05:03.100 "method": "nvmf_get_transports", 00:05:03.100 "req_id": 1 00:05:03.100 } 00:05:03.100 Got JSON-RPC error response 00:05:03.100 response: 00:05:03.100 { 00:05:03.100 "code": -19, 00:05:03.100 "message": "No such device" 00:05:03.100 } 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.100 [2024-12-13 06:10:54.659495] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:03.100 06:10:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:03.100 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:03.359 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:03.359 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.359 { 00:05:03.359 "subsystems": [ 00:05:03.359 { 00:05:03.359 "subsystem": "fsdev", 00:05:03.359 "config": [ 00:05:03.359 { 00:05:03.359 "method": "fsdev_set_opts", 00:05:03.359 "params": { 00:05:03.359 "fsdev_io_pool_size": 65535, 00:05:03.359 "fsdev_io_cache_size": 256 00:05:03.359 } 00:05:03.359 } 00:05:03.359 ] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "vfio_user_target", 00:05:03.359 "config": null 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "keyring", 00:05:03.359 "config": [] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "iobuf", 00:05:03.359 "config": [ 00:05:03.359 { 00:05:03.359 "method": "iobuf_set_options", 00:05:03.359 "params": { 00:05:03.359 "small_pool_count": 8192, 00:05:03.359 "large_pool_count": 1024, 00:05:03.359 "small_bufsize": 8192, 00:05:03.359 "large_bufsize": 135168, 00:05:03.359 "enable_numa": false 00:05:03.359 } 00:05:03.359 } 00:05:03.359 ] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "sock", 00:05:03.359 "config": [ 00:05:03.359 { 00:05:03.359 "method": "sock_set_default_impl", 00:05:03.359 "params": { 00:05:03.359 "impl_name": "posix" 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "sock_impl_set_options", 00:05:03.359 "params": { 00:05:03.359 "impl_name": "ssl", 00:05:03.359 "recv_buf_size": 4096, 00:05:03.359 "send_buf_size": 4096, 
00:05:03.359 "enable_recv_pipe": true, 00:05:03.359 "enable_quickack": false, 00:05:03.359 "enable_placement_id": 0, 00:05:03.359 "enable_zerocopy_send_server": true, 00:05:03.359 "enable_zerocopy_send_client": false, 00:05:03.359 "zerocopy_threshold": 0, 00:05:03.359 "tls_version": 0, 00:05:03.359 "enable_ktls": false 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "sock_impl_set_options", 00:05:03.359 "params": { 00:05:03.359 "impl_name": "posix", 00:05:03.359 "recv_buf_size": 2097152, 00:05:03.359 "send_buf_size": 2097152, 00:05:03.359 "enable_recv_pipe": true, 00:05:03.359 "enable_quickack": false, 00:05:03.359 "enable_placement_id": 0, 00:05:03.359 "enable_zerocopy_send_server": true, 00:05:03.359 "enable_zerocopy_send_client": false, 00:05:03.359 "zerocopy_threshold": 0, 00:05:03.359 "tls_version": 0, 00:05:03.359 "enable_ktls": false 00:05:03.359 } 00:05:03.359 } 00:05:03.359 ] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "vmd", 00:05:03.359 "config": [] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "accel", 00:05:03.359 "config": [ 00:05:03.359 { 00:05:03.359 "method": "accel_set_options", 00:05:03.359 "params": { 00:05:03.359 "small_cache_size": 128, 00:05:03.359 "large_cache_size": 16, 00:05:03.359 "task_count": 2048, 00:05:03.359 "sequence_count": 2048, 00:05:03.359 "buf_count": 2048 00:05:03.359 } 00:05:03.359 } 00:05:03.359 ] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "bdev", 00:05:03.359 "config": [ 00:05:03.359 { 00:05:03.359 "method": "bdev_set_options", 00:05:03.359 "params": { 00:05:03.359 "bdev_io_pool_size": 65535, 00:05:03.359 "bdev_io_cache_size": 256, 00:05:03.359 "bdev_auto_examine": true, 00:05:03.359 "iobuf_small_cache_size": 128, 00:05:03.359 "iobuf_large_cache_size": 16 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "bdev_raid_set_options", 00:05:03.359 "params": { 00:05:03.359 "process_window_size_kb": 1024, 00:05:03.359 "process_max_bandwidth_mb_sec": 0 
00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "bdev_iscsi_set_options", 00:05:03.359 "params": { 00:05:03.359 "timeout_sec": 30 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "bdev_nvme_set_options", 00:05:03.359 "params": { 00:05:03.359 "action_on_timeout": "none", 00:05:03.359 "timeout_us": 0, 00:05:03.359 "timeout_admin_us": 0, 00:05:03.359 "keep_alive_timeout_ms": 10000, 00:05:03.359 "arbitration_burst": 0, 00:05:03.359 "low_priority_weight": 0, 00:05:03.359 "medium_priority_weight": 0, 00:05:03.359 "high_priority_weight": 0, 00:05:03.359 "nvme_adminq_poll_period_us": 10000, 00:05:03.359 "nvme_ioq_poll_period_us": 0, 00:05:03.359 "io_queue_requests": 0, 00:05:03.359 "delay_cmd_submit": true, 00:05:03.359 "transport_retry_count": 4, 00:05:03.359 "bdev_retry_count": 3, 00:05:03.359 "transport_ack_timeout": 0, 00:05:03.359 "ctrlr_loss_timeout_sec": 0, 00:05:03.359 "reconnect_delay_sec": 0, 00:05:03.359 "fast_io_fail_timeout_sec": 0, 00:05:03.359 "disable_auto_failback": false, 00:05:03.359 "generate_uuids": false, 00:05:03.359 "transport_tos": 0, 00:05:03.359 "nvme_error_stat": false, 00:05:03.359 "rdma_srq_size": 0, 00:05:03.359 "io_path_stat": false, 00:05:03.359 "allow_accel_sequence": false, 00:05:03.359 "rdma_max_cq_size": 0, 00:05:03.359 "rdma_cm_event_timeout_ms": 0, 00:05:03.359 "dhchap_digests": [ 00:05:03.359 "sha256", 00:05:03.359 "sha384", 00:05:03.359 "sha512" 00:05:03.359 ], 00:05:03.359 "dhchap_dhgroups": [ 00:05:03.359 "null", 00:05:03.359 "ffdhe2048", 00:05:03.359 "ffdhe3072", 00:05:03.359 "ffdhe4096", 00:05:03.359 "ffdhe6144", 00:05:03.359 "ffdhe8192" 00:05:03.359 ], 00:05:03.359 "rdma_umr_per_io": false 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "bdev_nvme_set_hotplug", 00:05:03.359 "params": { 00:05:03.359 "period_us": 100000, 00:05:03.359 "enable": false 00:05:03.359 } 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "method": "bdev_wait_for_examine" 00:05:03.359 } 00:05:03.359 
] 00:05:03.359 }, 00:05:03.359 { 00:05:03.359 "subsystem": "scsi", 00:05:03.360 "config": null 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "scheduler", 00:05:03.360 "config": [ 00:05:03.360 { 00:05:03.360 "method": "framework_set_scheduler", 00:05:03.360 "params": { 00:05:03.360 "name": "static" 00:05:03.360 } 00:05:03.360 } 00:05:03.360 ] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "vhost_scsi", 00:05:03.360 "config": [] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "vhost_blk", 00:05:03.360 "config": [] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "ublk", 00:05:03.360 "config": [] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "nbd", 00:05:03.360 "config": [] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "nvmf", 00:05:03.360 "config": [ 00:05:03.360 { 00:05:03.360 "method": "nvmf_set_config", 00:05:03.360 "params": { 00:05:03.360 "discovery_filter": "match_any", 00:05:03.360 "admin_cmd_passthru": { 00:05:03.360 "identify_ctrlr": false 00:05:03.360 }, 00:05:03.360 "dhchap_digests": [ 00:05:03.360 "sha256", 00:05:03.360 "sha384", 00:05:03.360 "sha512" 00:05:03.360 ], 00:05:03.360 "dhchap_dhgroups": [ 00:05:03.360 "null", 00:05:03.360 "ffdhe2048", 00:05:03.360 "ffdhe3072", 00:05:03.360 "ffdhe4096", 00:05:03.360 "ffdhe6144", 00:05:03.360 "ffdhe8192" 00:05:03.360 ] 00:05:03.360 } 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "method": "nvmf_set_max_subsystems", 00:05:03.360 "params": { 00:05:03.360 "max_subsystems": 1024 00:05:03.360 } 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "method": "nvmf_set_crdt", 00:05:03.360 "params": { 00:05:03.360 "crdt1": 0, 00:05:03.360 "crdt2": 0, 00:05:03.360 "crdt3": 0 00:05:03.360 } 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "method": "nvmf_create_transport", 00:05:03.360 "params": { 00:05:03.360 "trtype": "TCP", 00:05:03.360 "max_queue_depth": 128, 00:05:03.360 "max_io_qpairs_per_ctrlr": 127, 00:05:03.360 "in_capsule_data_size": 4096, 00:05:03.360 "max_io_size": 
131072, 00:05:03.360 "io_unit_size": 131072, 00:05:03.360 "max_aq_depth": 128, 00:05:03.360 "num_shared_buffers": 511, 00:05:03.360 "buf_cache_size": 4294967295, 00:05:03.360 "dif_insert_or_strip": false, 00:05:03.360 "zcopy": false, 00:05:03.360 "c2h_success": true, 00:05:03.360 "sock_priority": 0, 00:05:03.360 "abort_timeout_sec": 1, 00:05:03.360 "ack_timeout": 0, 00:05:03.360 "data_wr_pool_size": 0 00:05:03.360 } 00:05:03.360 } 00:05:03.360 ] 00:05:03.360 }, 00:05:03.360 { 00:05:03.360 "subsystem": "iscsi", 00:05:03.360 "config": [ 00:05:03.360 { 00:05:03.360 "method": "iscsi_set_options", 00:05:03.360 "params": { 00:05:03.360 "node_base": "iqn.2016-06.io.spdk", 00:05:03.360 "max_sessions": 128, 00:05:03.360 "max_connections_per_session": 2, 00:05:03.360 "max_queue_depth": 64, 00:05:03.360 "default_time2wait": 2, 00:05:03.360 "default_time2retain": 20, 00:05:03.360 "first_burst_length": 8192, 00:05:03.360 "immediate_data": true, 00:05:03.360 "allow_duplicated_isid": false, 00:05:03.360 "error_recovery_level": 0, 00:05:03.360 "nop_timeout": 60, 00:05:03.360 "nop_in_interval": 30, 00:05:03.360 "disable_chap": false, 00:05:03.360 "require_chap": false, 00:05:03.360 "mutual_chap": false, 00:05:03.360 "chap_group": 0, 00:05:03.360 "max_large_datain_per_connection": 64, 00:05:03.360 "max_r2t_per_connection": 4, 00:05:03.360 "pdu_pool_size": 36864, 00:05:03.360 "immediate_data_pool_size": 16384, 00:05:03.360 "data_out_pool_size": 2048 00:05:03.360 } 00:05:03.360 } 00:05:03.360 ] 00:05:03.360 } 00:05:03.360 ] 00:05:03.360 } 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 773835 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773835 ']' 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773835 00:05:03.360 06:10:54 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773835 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773835' 00:05:03.360 killing process with pid 773835 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773835 00:05:03.360 06:10:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773835 00:05:03.619 06:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=773990 00:05:03.619 06:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:03.619 06:10:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 773990 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 773990 ']' 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 773990 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 773990 00:05:08.891 06:11:00 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 773990' 00:05:08.891 killing process with pid 773990 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 773990 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 773990 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:08.891 00:05:08.891 real 0m6.241s 00:05:08.891 user 0m5.940s 00:05:08.891 sys 0m0.600s 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:08.891 06:11:00 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:08.891 ************************************ 00:05:08.891 END TEST skip_rpc_with_json 00:05:08.891 ************************************ 00:05:09.150 06:11:00 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:09.150 06:11:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.150 06:11:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.150 06:11:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.150 ************************************ 00:05:09.150 START TEST skip_rpc_with_delay 00:05:09.150 ************************************ 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.150 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:09.151 [2024-12-13 06:11:00.657125] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.151 00:05:09.151 real 0m0.069s 00:05:09.151 user 0m0.040s 00:05:09.151 sys 0m0.029s 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.151 06:11:00 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 ************************************ 00:05:09.151 END TEST skip_rpc_with_delay 00:05:09.151 ************************************ 00:05:09.151 06:11:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:09.151 06:11:00 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:09.151 06:11:00 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:09.151 06:11:00 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.151 06:11:00 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.151 06:11:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 ************************************ 00:05:09.151 START TEST exit_on_failed_rpc_init 00:05:09.151 ************************************ 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=775017 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 775017 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 775017 ']' 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:09.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:09.151 06:11:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:09.151 [2024-12-13 06:11:00.794337] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:09.151 [2024-12-13 06:11:00.794379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775017 ] 00:05:09.410 [2024-12-13 06:11:00.869233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.410 [2024-12-13 06:11:00.892017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.669 
06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:09.669 [2024-12-13 06:11:01.140368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:09.669 [2024-12-13 06:11:01.140414] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775024 ] 00:05:09.669 [2024-12-13 06:11:01.212998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.669 [2024-12-13 06:11:01.235246] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.669 [2024-12-13 06:11:01.235301] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:09.669 [2024-12-13 06:11:01.235310] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:09.669 [2024-12-13 06:11:01.235316] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 775017 00:05:09.669 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 775017 ']' 00:05:09.670 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 775017 00:05:09.670 06:11:01 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:09.670 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:09.670 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 775017 00:05:09.929 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:09.929 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:09.929 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 775017' 00:05:09.929 killing process with pid 775017 00:05:09.929 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 775017 00:05:09.929 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 775017 00:05:10.188 00:05:10.188 real 0m0.872s 00:05:10.188 user 0m0.928s 00:05:10.188 sys 0m0.364s 00:05:10.188 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.188 06:11:01 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:10.188 ************************************ 00:05:10.188 END TEST exit_on_failed_rpc_init 00:05:10.188 ************************************ 00:05:10.188 06:11:01 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:10.188 00:05:10.188 real 0m12.988s 00:05:10.188 user 0m12.206s 00:05:10.188 sys 0m1.568s 00:05:10.188 06:11:01 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.188 06:11:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.188 ************************************ 00:05:10.188 END TEST skip_rpc 00:05:10.188 ************************************ 00:05:10.188 06:11:01 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.188 06:11:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.188 06:11:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.188 06:11:01 -- common/autotest_common.sh@10 -- # set +x 00:05:10.188 ************************************ 00:05:10.188 START TEST rpc_client 00:05:10.188 ************************************ 00:05:10.188 06:11:01 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:10.188 * Looking for test storage... 00:05:10.188 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:10.188 06:11:01 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.188 06:11:01 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.188 06:11:01 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.447 06:11:01 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.447 06:11:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.448 06:11:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.448 --rc genhtml_branch_coverage=1 00:05:10.448 --rc genhtml_function_coverage=1 00:05:10.448 --rc genhtml_legend=1 00:05:10.448 --rc geninfo_all_blocks=1 00:05:10.448 --rc geninfo_unexecuted_blocks=1 00:05:10.448 00:05:10.448 ' 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.448 --rc genhtml_branch_coverage=1 
00:05:10.448 --rc genhtml_function_coverage=1 00:05:10.448 --rc genhtml_legend=1 00:05:10.448 --rc geninfo_all_blocks=1 00:05:10.448 --rc geninfo_unexecuted_blocks=1 00:05:10.448 00:05:10.448 ' 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.448 --rc genhtml_branch_coverage=1 00:05:10.448 --rc genhtml_function_coverage=1 00:05:10.448 --rc genhtml_legend=1 00:05:10.448 --rc geninfo_all_blocks=1 00:05:10.448 --rc geninfo_unexecuted_blocks=1 00:05:10.448 00:05:10.448 ' 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.448 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.448 --rc genhtml_branch_coverage=1 00:05:10.448 --rc genhtml_function_coverage=1 00:05:10.448 --rc genhtml_legend=1 00:05:10.448 --rc geninfo_all_blocks=1 00:05:10.448 --rc geninfo_unexecuted_blocks=1 00:05:10.448 00:05:10.448 ' 00:05:10.448 06:11:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:10.448 OK 00:05:10.448 06:11:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:10.448 00:05:10.448 real 0m0.203s 00:05:10.448 user 0m0.117s 00:05:10.448 sys 0m0.099s 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.448 06:11:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:10.448 ************************************ 00:05:10.448 END TEST rpc_client 00:05:10.448 ************************************ 00:05:10.448 06:11:01 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.448 06:11:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.448 06:11:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.448 06:11:01 -- common/autotest_common.sh@10 
-- # set +x 00:05:10.448 ************************************ 00:05:10.448 START TEST json_config 00:05:10.448 ************************************ 00:05:10.448 06:11:01 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:10.448 06:11:02 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:10.448 06:11:02 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:10.448 06:11:02 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:10.708 06:11:02 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:10.708 06:11:02 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:10.708 06:11:02 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:10.708 06:11:02 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:10.708 06:11:02 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:10.708 06:11:02 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:10.708 06:11:02 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:10.708 06:11:02 json_config -- scripts/common.sh@345 -- # : 1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:10.708 06:11:02 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:10.708 06:11:02 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@353 -- # local d=1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:10.708 06:11:02 json_config -- scripts/common.sh@355 -- # echo 1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:10.708 06:11:02 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@353 -- # local d=2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:10.708 06:11:02 json_config -- scripts/common.sh@355 -- # echo 2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:10.708 06:11:02 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:10.708 06:11:02 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:10.708 06:11:02 json_config -- scripts/common.sh@368 -- # return 0 00:05:10.708 06:11:02 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:10.708 06:11:02 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.708 --rc genhtml_branch_coverage=1 00:05:10.708 --rc genhtml_function_coverage=1 00:05:10.708 --rc genhtml_legend=1 00:05:10.708 --rc geninfo_all_blocks=1 00:05:10.708 --rc geninfo_unexecuted_blocks=1 00:05:10.708 00:05:10.708 ' 00:05:10.708 06:11:02 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.708 --rc genhtml_branch_coverage=1 00:05:10.708 --rc genhtml_function_coverage=1 00:05:10.708 --rc genhtml_legend=1 00:05:10.708 --rc geninfo_all_blocks=1 00:05:10.708 --rc geninfo_unexecuted_blocks=1 00:05:10.708 00:05:10.708 ' 00:05:10.708 06:11:02 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.708 --rc genhtml_branch_coverage=1 00:05:10.708 --rc genhtml_function_coverage=1 00:05:10.708 --rc genhtml_legend=1 00:05:10.708 --rc geninfo_all_blocks=1 00:05:10.708 --rc geninfo_unexecuted_blocks=1 00:05:10.708 00:05:10.708 ' 00:05:10.708 06:11:02 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:10.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:10.708 --rc genhtml_branch_coverage=1 00:05:10.708 --rc genhtml_function_coverage=1 00:05:10.708 --rc genhtml_legend=1 00:05:10.708 --rc geninfo_all_blocks=1 00:05:10.708 --rc geninfo_unexecuted_blocks=1 00:05:10.708 00:05:10.708 ' 00:05:10.708 06:11:02 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:10.708 06:11:02 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:10.708 06:11:02 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:10.708 06:11:02 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:10.708 06:11:02 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:10.708 06:11:02 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:10.708 06:11:02 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.708 06:11:02 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.709 06:11:02 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.709 06:11:02 json_config -- paths/export.sh@5 -- # export PATH 00:05:10.709 06:11:02 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@51 -- # : 0 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:10.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:10.709 06:11:02 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:10.709 INFO: JSON configuration test init 00:05:10.709 06:11:02 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.709 06:11:02 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:10.709 06:11:02 json_config -- json_config/common.sh@9 -- # local app=target 00:05:10.709 06:11:02 json_config -- json_config/common.sh@10 -- # shift 00:05:10.709 06:11:02 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:10.709 06:11:02 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:10.709 06:11:02 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:10.709 06:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.709 06:11:02 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:10.709 06:11:02 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775372 00:05:10.709 06:11:02 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:10.709 Waiting for target to run... 
00:05:10.709 06:11:02 json_config -- json_config/common.sh@25 -- # waitforlisten 775372 /var/tmp/spdk_tgt.sock 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@835 -- # '[' -z 775372 ']' 00:05:10.709 06:11:02 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:10.709 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:10.709 06:11:02 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.709 [2024-12-13 06:11:02.250589] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:10.709 [2024-12-13 06:11:02.250636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775372 ] 00:05:11.277 [2024-12-13 06:11:02.699051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.277 [2024-12-13 06:11:02.718207] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:11.535 06:11:03 json_config -- json_config/common.sh@26 -- # echo '' 00:05:11.535 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:11.535 06:11:03 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:11.535 06:11:03 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:11.535 06:11:03 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:14.823 06:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@54 -- # sort 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:14.823 06:11:06 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:14.823 06:11:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:14.823 06:11:06 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:14.823 06:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:15.082 MallocForNvmf0 00:05:15.082 06:11:06 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:15.082 06:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:15.339 MallocForNvmf1 00:05:15.339 06:11:06 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.339 06:11:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:15.339 [2024-12-13 06:11:06.983353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:15.597 06:11:07 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.597 06:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:15.597 06:11:07 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.597 06:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:15.856 06:11:07 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:15.856 06:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:16.114 06:11:07 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.114 06:11:07 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:16.373 [2024-12-13 06:11:07.773833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:16.373 06:11:07 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:16.373 06:11:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.373 06:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.373 06:11:07 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:16.373 06:11:07 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.373 06:11:07 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.373 06:11:07 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:16.373 06:11:07 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.373 06:11:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:16.632 MallocBdevForConfigChangeCheck 00:05:16.632 06:11:08 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:16.632 06:11:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:16.632 06:11:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:16.632 06:11:08 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:16.632 06:11:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:16.890 06:11:08 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:16.890 INFO: shutting down applications... 00:05:16.890 06:11:08 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:16.890 06:11:08 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:16.890 06:11:08 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:16.890 06:11:08 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:18.794 Calling clear_iscsi_subsystem 00:05:18.794 Calling clear_nvmf_subsystem 00:05:18.794 Calling clear_nbd_subsystem 00:05:18.794 Calling clear_ublk_subsystem 00:05:18.794 Calling clear_vhost_blk_subsystem 00:05:18.794 Calling clear_vhost_scsi_subsystem 00:05:18.794 Calling clear_bdev_subsystem 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@352 -- # break 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:18.794 06:11:10 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:18.794 06:11:10 json_config -- json_config/common.sh@31 -- # local app=target 00:05:18.794 06:11:10 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:18.794 06:11:10 json_config -- json_config/common.sh@35 -- # [[ -n 775372 ]] 00:05:18.794 06:11:10 json_config -- json_config/common.sh@38 -- # kill -SIGINT 775372 00:05:18.794 06:11:10 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:18.794 06:11:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:18.794 06:11:10 json_config -- json_config/common.sh@41 -- # kill -0 775372 00:05:18.794 06:11:10 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:19.362 06:11:10 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:19.362 06:11:10 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:19.362 06:11:10 json_config -- json_config/common.sh@41 -- # kill -0 775372 00:05:19.362 06:11:10 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:19.362 06:11:10 json_config -- json_config/common.sh@43 -- # break 00:05:19.362 06:11:10 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:19.362 06:11:10 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:19.362 SPDK target shutdown done 00:05:19.362 06:11:10 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:19.362 INFO: relaunching applications... 
00:05:19.362 06:11:10 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.362 06:11:10 json_config -- json_config/common.sh@9 -- # local app=target 00:05:19.362 06:11:10 json_config -- json_config/common.sh@10 -- # shift 00:05:19.362 06:11:10 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:19.362 06:11:10 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:19.362 06:11:10 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:19.362 06:11:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.363 06:11:10 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:19.363 06:11:10 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=776858 00:05:19.363 06:11:10 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:19.363 Waiting for target to run... 00:05:19.363 06:11:10 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:19.363 06:11:10 json_config -- json_config/common.sh@25 -- # waitforlisten 776858 /var/tmp/spdk_tgt.sock 00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@835 -- # '[' -z 776858 ']' 00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:19.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.363 06:11:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:19.363 [2024-12-13 06:11:10.980026] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:19.363 [2024-12-13 06:11:10.980083] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776858 ] 00:05:19.930 [2024-12-13 06:11:11.443551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.930 [2024-12-13 06:11:11.464669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.216 [2024-12-13 06:11:14.475326] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:23.216 [2024-12-13 06:11:14.507597] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:23.784 06:11:15 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.784 06:11:15 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:23.784 06:11:15 json_config -- json_config/common.sh@26 -- # echo '' 00:05:23.784 00:05:23.784 06:11:15 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:23.784 06:11:15 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:23.784 INFO: Checking if target configuration is the same... 
00:05:23.784 06:11:15 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:23.784 06:11:15 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.784 06:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:23.784 + '[' 2 -ne 2 ']' 00:05:23.784 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:23.784 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:23.784 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:23.784 +++ basename /dev/fd/62 00:05:23.784 ++ mktemp /tmp/62.XXX 00:05:23.784 + tmp_file_1=/tmp/62.GfU 00:05:23.784 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:23.784 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:23.784 + tmp_file_2=/tmp/spdk_tgt_config.json.unP 00:05:23.784 + ret=0 00:05:23.784 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.046 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.046 + diff -u /tmp/62.GfU /tmp/spdk_tgt_config.json.unP 00:05:24.046 + echo 'INFO: JSON config files are the same' 00:05:24.046 INFO: JSON config files are the same 00:05:24.046 + rm /tmp/62.GfU /tmp/spdk_tgt_config.json.unP 00:05:24.046 + exit 0 00:05:24.046 06:11:15 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:24.046 06:11:15 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:24.046 INFO: changing configuration and checking if this can be detected... 
00:05:24.046 06:11:15 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.046 06:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:24.304 06:11:15 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.305 06:11:15 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:24.305 06:11:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:24.305 + '[' 2 -ne 2 ']' 00:05:24.305 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:24.305 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:24.305 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:24.305 +++ basename /dev/fd/62 00:05:24.305 ++ mktemp /tmp/62.XXX 00:05:24.305 + tmp_file_1=/tmp/62.zcI 00:05:24.305 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:24.305 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:24.305 + tmp_file_2=/tmp/spdk_tgt_config.json.ozZ 00:05:24.305 + ret=0 00:05:24.305 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.563 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:24.563 + diff -u /tmp/62.zcI /tmp/spdk_tgt_config.json.ozZ 00:05:24.563 + ret=1 00:05:24.563 + echo '=== Start of file: /tmp/62.zcI ===' 00:05:24.563 + cat /tmp/62.zcI 00:05:24.821 + echo '=== End of file: /tmp/62.zcI ===' 00:05:24.821 + echo '' 00:05:24.821 + echo '=== Start of file: /tmp/spdk_tgt_config.json.ozZ ===' 00:05:24.821 + cat /tmp/spdk_tgt_config.json.ozZ 00:05:24.821 + echo '=== End of file: /tmp/spdk_tgt_config.json.ozZ ===' 00:05:24.821 + echo '' 00:05:24.821 + rm /tmp/62.zcI /tmp/spdk_tgt_config.json.ozZ 00:05:24.821 + exit 1 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:24.821 INFO: configuration change detected. 
00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@324 -- # [[ -n 776858 ]] 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.821 06:11:16 json_config -- json_config/json_config.sh@330 -- # killprocess 776858 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@954 -- # '[' -z 776858 ']' 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@958 -- # kill -0 776858 
00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@959 -- # uname 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 776858 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 776858' 00:05:24.821 killing process with pid 776858 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@973 -- # kill 776858 00:05:24.821 06:11:16 json_config -- common/autotest_common.sh@978 -- # wait 776858 00:05:26.196 06:11:17 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:26.196 06:11:17 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:26.196 06:11:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:26.196 06:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.196 06:11:17 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:26.196 06:11:17 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:26.196 INFO: Success 00:05:26.196 00:05:26.196 real 0m15.848s 00:05:26.196 user 0m16.965s 00:05:26.196 sys 0m2.113s 00:05:26.196 06:11:17 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.196 06:11:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:26.196 ************************************ 00:05:26.196 END TEST json_config 00:05:26.196 ************************************ 00:05:26.455 06:11:17 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.455 06:11:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.455 06:11:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.455 06:11:17 -- common/autotest_common.sh@10 -- # set +x 00:05:26.455 ************************************ 00:05:26.455 START TEST json_config_extra_key 00:05:26.455 ************************************ 00:05:26.455 06:11:17 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:26.455 06:11:17 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:26.455 06:11:17 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:26.455 06:11:17 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:26.455 06:11:18 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:26.455 06:11:18 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:26.455 06:11:18 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:26.456 06:11:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.456 --rc genhtml_branch_coverage=1 00:05:26.456 --rc genhtml_function_coverage=1 00:05:26.456 --rc genhtml_legend=1 00:05:26.456 --rc geninfo_all_blocks=1 
00:05:26.456 --rc geninfo_unexecuted_blocks=1 00:05:26.456 00:05:26.456 ' 00:05:26.456 06:11:18 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.456 --rc genhtml_branch_coverage=1 00:05:26.456 --rc genhtml_function_coverage=1 00:05:26.456 --rc genhtml_legend=1 00:05:26.456 --rc geninfo_all_blocks=1 00:05:26.456 --rc geninfo_unexecuted_blocks=1 00:05:26.456 00:05:26.456 ' 00:05:26.456 06:11:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.456 --rc genhtml_branch_coverage=1 00:05:26.456 --rc genhtml_function_coverage=1 00:05:26.456 --rc genhtml_legend=1 00:05:26.456 --rc geninfo_all_blocks=1 00:05:26.456 --rc geninfo_unexecuted_blocks=1 00:05:26.456 00:05:26.456 ' 00:05:26.456 06:11:18 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:26.456 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:26.456 --rc genhtml_branch_coverage=1 00:05:26.456 --rc genhtml_function_coverage=1 00:05:26.456 --rc genhtml_legend=1 00:05:26.456 --rc geninfo_all_blocks=1 00:05:26.456 --rc geninfo_unexecuted_blocks=1 00:05:26.456 00:05:26.456 ' 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:26.456 06:11:18 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:26.456 06:11:18 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:26.456 06:11:18 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:26.456 06:11:18 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:26.456 06:11:18 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.456 06:11:18 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.456 06:11:18 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.456 06:11:18 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:26.456 06:11:18 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:26.456 06:11:18 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:26.456 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:26.456 06:11:18 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:26.456 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:26.715 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:26.715 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:26.715 INFO: launching applications... 00:05:26.715 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=778205 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:26.715 Waiting for target to run... 
00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 778205 /var/tmp/spdk_tgt.sock 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 778205 ']' 00:05:26.715 06:11:18 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:26.715 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.715 06:11:18 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:26.715 [2024-12-13 06:11:18.164071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:26.715 [2024-12-13 06:11:18.164122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778205 ] 00:05:26.974 [2024-12-13 06:11:18.451775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.974 [2024-12-13 06:11:18.464713] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.540 06:11:18 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.540 06:11:18 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:27.540 00:05:27.540 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:27.540 INFO: shutting down applications... 00:05:27.540 06:11:18 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 778205 ]] 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 778205 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778205 00:05:27.540 06:11:18 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:28.108 06:11:19 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 778205 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:28.108 06:11:19 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:28.108 SPDK target shutdown done 00:05:28.108 06:11:19 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:28.108 Success 00:05:28.108 00:05:28.108 real 0m1.572s 00:05:28.108 user 0m1.339s 00:05:28.108 sys 0m0.401s 00:05:28.108 06:11:19 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.108 06:11:19 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.108 ************************************ 00:05:28.108 END TEST json_config_extra_key 00:05:28.108 ************************************ 00:05:28.108 06:11:19 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.108 06:11:19 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.108 06:11:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.108 06:11:19 -- common/autotest_common.sh@10 -- # set +x 00:05:28.108 ************************************ 00:05:28.108 START TEST alias_rpc 00:05:28.108 ************************************ 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:28.108 * Looking for test storage... 
00:05:28.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.108 06:11:19 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:28.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.108 --rc genhtml_branch_coverage=1 00:05:28.108 --rc genhtml_function_coverage=1 00:05:28.108 --rc genhtml_legend=1 00:05:28.108 --rc geninfo_all_blocks=1 00:05:28.108 --rc geninfo_unexecuted_blocks=1 00:05:28.108 00:05:28.108 ' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:28.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.108 --rc genhtml_branch_coverage=1 00:05:28.108 --rc genhtml_function_coverage=1 00:05:28.108 --rc genhtml_legend=1 00:05:28.108 --rc geninfo_all_blocks=1 00:05:28.108 --rc geninfo_unexecuted_blocks=1 00:05:28.108 00:05:28.108 ' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:28.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.108 --rc genhtml_branch_coverage=1 00:05:28.108 --rc genhtml_function_coverage=1 00:05:28.108 --rc genhtml_legend=1 00:05:28.108 --rc geninfo_all_blocks=1 00:05:28.108 --rc geninfo_unexecuted_blocks=1 00:05:28.108 00:05:28.108 ' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:28.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.108 --rc genhtml_branch_coverage=1 00:05:28.108 --rc genhtml_function_coverage=1 00:05:28.108 --rc genhtml_legend=1 00:05:28.108 --rc geninfo_all_blocks=1 00:05:28.108 --rc geninfo_unexecuted_blocks=1 00:05:28.108 00:05:28.108 ' 00:05:28.108 06:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:28.108 06:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:28.108 06:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=778596 00:05:28.108 06:11:19 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 778596 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 778596 ']' 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.108 06:11:19 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.367 [2024-12-13 06:11:19.794330] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:28.367 [2024-12-13 06:11:19.794375] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778596 ] 00:05:28.367 [2024-12-13 06:11:19.868562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.367 [2024-12-13 06:11:19.891461] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.625 06:11:20 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.625 06:11:20 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:28.625 06:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:28.884 06:11:20 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 778596 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 778596 ']' 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 778596 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778596 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778596' 00:05:28.884 killing process with pid 778596 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@973 -- # kill 778596 00:05:28.884 06:11:20 alias_rpc -- common/autotest_common.sh@978 -- # wait 778596 00:05:29.143 00:05:29.143 real 0m1.081s 00:05:29.143 user 0m1.085s 00:05:29.143 sys 0m0.420s 00:05:29.143 06:11:20 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:29.143 06:11:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:29.143 ************************************ 00:05:29.143 END TEST alias_rpc 00:05:29.143 ************************************ 00:05:29.143 06:11:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:29.143 06:11:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.143 06:11:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:29.143 06:11:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:29.143 06:11:20 -- common/autotest_common.sh@10 -- # set +x 00:05:29.143 ************************************ 00:05:29.143 START TEST spdkcli_tcp 00:05:29.143 ************************************ 00:05:29.143 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:29.402 * Looking for test storage... 
00:05:29.402 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:29.402 06:11:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:29.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.402 --rc genhtml_branch_coverage=1 00:05:29.402 --rc genhtml_function_coverage=1 00:05:29.402 --rc genhtml_legend=1 00:05:29.402 --rc geninfo_all_blocks=1 00:05:29.402 --rc geninfo_unexecuted_blocks=1 00:05:29.402 00:05:29.402 ' 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:29.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.402 --rc genhtml_branch_coverage=1 00:05:29.402 --rc genhtml_function_coverage=1 00:05:29.402 --rc genhtml_legend=1 00:05:29.402 --rc geninfo_all_blocks=1 00:05:29.402 --rc geninfo_unexecuted_blocks=1 00:05:29.402 00:05:29.402 ' 00:05:29.402 06:11:20 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:29.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.402 --rc genhtml_branch_coverage=1 00:05:29.402 --rc genhtml_function_coverage=1 00:05:29.402 --rc genhtml_legend=1 00:05:29.402 --rc geninfo_all_blocks=1 00:05:29.402 --rc geninfo_unexecuted_blocks=1 00:05:29.402 00:05:29.402 ' 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:29.402 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:29.402 --rc genhtml_branch_coverage=1 00:05:29.402 --rc genhtml_function_coverage=1 00:05:29.402 --rc genhtml_legend=1 00:05:29.402 --rc geninfo_all_blocks=1 00:05:29.402 --rc geninfo_unexecuted_blocks=1 00:05:29.402 00:05:29.402 ' 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=778792 00:05:29.402 06:11:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:29.402 06:11:20 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 778792 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 778792 ']' 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:29.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.402 06:11:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.402 [2024-12-13 06:11:20.953322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:29.402 [2024-12-13 06:11:20.953373] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778792 ] 00:05:29.402 [2024-12-13 06:11:21.030551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.402 [2024-12-13 06:11:21.054187] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.402 [2024-12-13 06:11:21.054189] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.661 06:11:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.661 06:11:21 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:29.661 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=778889 00:05:29.661 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:29.661 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 
127.0.0.1 -p 9998 rpc_get_methods 00:05:29.920 [ 00:05:29.920 "bdev_malloc_delete", 00:05:29.920 "bdev_malloc_create", 00:05:29.920 "bdev_null_resize", 00:05:29.920 "bdev_null_delete", 00:05:29.920 "bdev_null_create", 00:05:29.920 "bdev_nvme_cuse_unregister", 00:05:29.920 "bdev_nvme_cuse_register", 00:05:29.920 "bdev_opal_new_user", 00:05:29.920 "bdev_opal_set_lock_state", 00:05:29.920 "bdev_opal_delete", 00:05:29.920 "bdev_opal_get_info", 00:05:29.920 "bdev_opal_create", 00:05:29.920 "bdev_nvme_opal_revert", 00:05:29.920 "bdev_nvme_opal_init", 00:05:29.920 "bdev_nvme_send_cmd", 00:05:29.920 "bdev_nvme_set_keys", 00:05:29.920 "bdev_nvme_get_path_iostat", 00:05:29.920 "bdev_nvme_get_mdns_discovery_info", 00:05:29.920 "bdev_nvme_stop_mdns_discovery", 00:05:29.920 "bdev_nvme_start_mdns_discovery", 00:05:29.920 "bdev_nvme_set_multipath_policy", 00:05:29.920 "bdev_nvme_set_preferred_path", 00:05:29.920 "bdev_nvme_get_io_paths", 00:05:29.920 "bdev_nvme_remove_error_injection", 00:05:29.920 "bdev_nvme_add_error_injection", 00:05:29.920 "bdev_nvme_get_discovery_info", 00:05:29.920 "bdev_nvme_stop_discovery", 00:05:29.920 "bdev_nvme_start_discovery", 00:05:29.920 "bdev_nvme_get_controller_health_info", 00:05:29.920 "bdev_nvme_disable_controller", 00:05:29.920 "bdev_nvme_enable_controller", 00:05:29.920 "bdev_nvme_reset_controller", 00:05:29.920 "bdev_nvme_get_transport_statistics", 00:05:29.920 "bdev_nvme_apply_firmware", 00:05:29.920 "bdev_nvme_detach_controller", 00:05:29.920 "bdev_nvme_get_controllers", 00:05:29.920 "bdev_nvme_attach_controller", 00:05:29.920 "bdev_nvme_set_hotplug", 00:05:29.920 "bdev_nvme_set_options", 00:05:29.920 "bdev_passthru_delete", 00:05:29.920 "bdev_passthru_create", 00:05:29.920 "bdev_lvol_set_parent_bdev", 00:05:29.920 "bdev_lvol_set_parent", 00:05:29.920 "bdev_lvol_check_shallow_copy", 00:05:29.920 "bdev_lvol_start_shallow_copy", 00:05:29.920 "bdev_lvol_grow_lvstore", 00:05:29.920 "bdev_lvol_get_lvols", 00:05:29.920 "bdev_lvol_get_lvstores", 
00:05:29.920 "bdev_lvol_delete", 00:05:29.920 "bdev_lvol_set_read_only", 00:05:29.920 "bdev_lvol_resize", 00:05:29.920 "bdev_lvol_decouple_parent", 00:05:29.920 "bdev_lvol_inflate", 00:05:29.920 "bdev_lvol_rename", 00:05:29.920 "bdev_lvol_clone_bdev", 00:05:29.920 "bdev_lvol_clone", 00:05:29.920 "bdev_lvol_snapshot", 00:05:29.920 "bdev_lvol_create", 00:05:29.920 "bdev_lvol_delete_lvstore", 00:05:29.920 "bdev_lvol_rename_lvstore", 00:05:29.920 "bdev_lvol_create_lvstore", 00:05:29.920 "bdev_raid_set_options", 00:05:29.921 "bdev_raid_remove_base_bdev", 00:05:29.921 "bdev_raid_add_base_bdev", 00:05:29.921 "bdev_raid_delete", 00:05:29.921 "bdev_raid_create", 00:05:29.921 "bdev_raid_get_bdevs", 00:05:29.921 "bdev_error_inject_error", 00:05:29.921 "bdev_error_delete", 00:05:29.921 "bdev_error_create", 00:05:29.921 "bdev_split_delete", 00:05:29.921 "bdev_split_create", 00:05:29.921 "bdev_delay_delete", 00:05:29.921 "bdev_delay_create", 00:05:29.921 "bdev_delay_update_latency", 00:05:29.921 "bdev_zone_block_delete", 00:05:29.921 "bdev_zone_block_create", 00:05:29.921 "blobfs_create", 00:05:29.921 "blobfs_detect", 00:05:29.921 "blobfs_set_cache_size", 00:05:29.921 "bdev_aio_delete", 00:05:29.921 "bdev_aio_rescan", 00:05:29.921 "bdev_aio_create", 00:05:29.921 "bdev_ftl_set_property", 00:05:29.921 "bdev_ftl_get_properties", 00:05:29.921 "bdev_ftl_get_stats", 00:05:29.921 "bdev_ftl_unmap", 00:05:29.921 "bdev_ftl_unload", 00:05:29.921 "bdev_ftl_delete", 00:05:29.921 "bdev_ftl_load", 00:05:29.921 "bdev_ftl_create", 00:05:29.921 "bdev_virtio_attach_controller", 00:05:29.921 "bdev_virtio_scsi_get_devices", 00:05:29.921 "bdev_virtio_detach_controller", 00:05:29.921 "bdev_virtio_blk_set_hotplug", 00:05:29.921 "bdev_iscsi_delete", 00:05:29.921 "bdev_iscsi_create", 00:05:29.921 "bdev_iscsi_set_options", 00:05:29.921 "accel_error_inject_error", 00:05:29.921 "ioat_scan_accel_module", 00:05:29.921 "dsa_scan_accel_module", 00:05:29.921 "iaa_scan_accel_module", 00:05:29.921 
"vfu_virtio_create_fs_endpoint", 00:05:29.921 "vfu_virtio_create_scsi_endpoint", 00:05:29.921 "vfu_virtio_scsi_remove_target", 00:05:29.921 "vfu_virtio_scsi_add_target", 00:05:29.921 "vfu_virtio_create_blk_endpoint", 00:05:29.921 "vfu_virtio_delete_endpoint", 00:05:29.921 "keyring_file_remove_key", 00:05:29.921 "keyring_file_add_key", 00:05:29.921 "keyring_linux_set_options", 00:05:29.921 "fsdev_aio_delete", 00:05:29.921 "fsdev_aio_create", 00:05:29.921 "iscsi_get_histogram", 00:05:29.921 "iscsi_enable_histogram", 00:05:29.921 "iscsi_set_options", 00:05:29.921 "iscsi_get_auth_groups", 00:05:29.921 "iscsi_auth_group_remove_secret", 00:05:29.921 "iscsi_auth_group_add_secret", 00:05:29.921 "iscsi_delete_auth_group", 00:05:29.921 "iscsi_create_auth_group", 00:05:29.921 "iscsi_set_discovery_auth", 00:05:29.921 "iscsi_get_options", 00:05:29.921 "iscsi_target_node_request_logout", 00:05:29.921 "iscsi_target_node_set_redirect", 00:05:29.921 "iscsi_target_node_set_auth", 00:05:29.921 "iscsi_target_node_add_lun", 00:05:29.921 "iscsi_get_stats", 00:05:29.921 "iscsi_get_connections", 00:05:29.921 "iscsi_portal_group_set_auth", 00:05:29.921 "iscsi_start_portal_group", 00:05:29.921 "iscsi_delete_portal_group", 00:05:29.921 "iscsi_create_portal_group", 00:05:29.921 "iscsi_get_portal_groups", 00:05:29.921 "iscsi_delete_target_node", 00:05:29.921 "iscsi_target_node_remove_pg_ig_maps", 00:05:29.921 "iscsi_target_node_add_pg_ig_maps", 00:05:29.921 "iscsi_create_target_node", 00:05:29.921 "iscsi_get_target_nodes", 00:05:29.921 "iscsi_delete_initiator_group", 00:05:29.921 "iscsi_initiator_group_remove_initiators", 00:05:29.921 "iscsi_initiator_group_add_initiators", 00:05:29.921 "iscsi_create_initiator_group", 00:05:29.921 "iscsi_get_initiator_groups", 00:05:29.921 "nvmf_set_crdt", 00:05:29.921 "nvmf_set_config", 00:05:29.921 "nvmf_set_max_subsystems", 00:05:29.921 "nvmf_stop_mdns_prr", 00:05:29.921 "nvmf_publish_mdns_prr", 00:05:29.921 "nvmf_subsystem_get_listeners", 00:05:29.921 
"nvmf_subsystem_get_qpairs", 00:05:29.921 "nvmf_subsystem_get_controllers", 00:05:29.921 "nvmf_get_stats", 00:05:29.921 "nvmf_get_transports", 00:05:29.921 "nvmf_create_transport", 00:05:29.921 "nvmf_get_targets", 00:05:29.921 "nvmf_delete_target", 00:05:29.921 "nvmf_create_target", 00:05:29.921 "nvmf_subsystem_allow_any_host", 00:05:29.921 "nvmf_subsystem_set_keys", 00:05:29.921 "nvmf_subsystem_remove_host", 00:05:29.921 "nvmf_subsystem_add_host", 00:05:29.921 "nvmf_ns_remove_host", 00:05:29.921 "nvmf_ns_add_host", 00:05:29.921 "nvmf_subsystem_remove_ns", 00:05:29.921 "nvmf_subsystem_set_ns_ana_group", 00:05:29.921 "nvmf_subsystem_add_ns", 00:05:29.921 "nvmf_subsystem_listener_set_ana_state", 00:05:29.921 "nvmf_discovery_get_referrals", 00:05:29.921 "nvmf_discovery_remove_referral", 00:05:29.921 "nvmf_discovery_add_referral", 00:05:29.921 "nvmf_subsystem_remove_listener", 00:05:29.921 "nvmf_subsystem_add_listener", 00:05:29.921 "nvmf_delete_subsystem", 00:05:29.921 "nvmf_create_subsystem", 00:05:29.921 "nvmf_get_subsystems", 00:05:29.921 "env_dpdk_get_mem_stats", 00:05:29.921 "nbd_get_disks", 00:05:29.921 "nbd_stop_disk", 00:05:29.921 "nbd_start_disk", 00:05:29.921 "ublk_recover_disk", 00:05:29.921 "ublk_get_disks", 00:05:29.921 "ublk_stop_disk", 00:05:29.921 "ublk_start_disk", 00:05:29.921 "ublk_destroy_target", 00:05:29.921 "ublk_create_target", 00:05:29.921 "virtio_blk_create_transport", 00:05:29.921 "virtio_blk_get_transports", 00:05:29.921 "vhost_controller_set_coalescing", 00:05:29.921 "vhost_get_controllers", 00:05:29.921 "vhost_delete_controller", 00:05:29.921 "vhost_create_blk_controller", 00:05:29.921 "vhost_scsi_controller_remove_target", 00:05:29.921 "vhost_scsi_controller_add_target", 00:05:29.921 "vhost_start_scsi_controller", 00:05:29.921 "vhost_create_scsi_controller", 00:05:29.921 "thread_set_cpumask", 00:05:29.921 "scheduler_set_options", 00:05:29.921 "framework_get_governor", 00:05:29.921 "framework_get_scheduler", 00:05:29.921 
"framework_set_scheduler", 00:05:29.921 "framework_get_reactors", 00:05:29.921 "thread_get_io_channels", 00:05:29.921 "thread_get_pollers", 00:05:29.921 "thread_get_stats", 00:05:29.921 "framework_monitor_context_switch", 00:05:29.921 "spdk_kill_instance", 00:05:29.921 "log_enable_timestamps", 00:05:29.921 "log_get_flags", 00:05:29.921 "log_clear_flag", 00:05:29.921 "log_set_flag", 00:05:29.921 "log_get_level", 00:05:29.921 "log_set_level", 00:05:29.921 "log_get_print_level", 00:05:29.921 "log_set_print_level", 00:05:29.921 "framework_enable_cpumask_locks", 00:05:29.921 "framework_disable_cpumask_locks", 00:05:29.921 "framework_wait_init", 00:05:29.921 "framework_start_init", 00:05:29.921 "scsi_get_devices", 00:05:29.921 "bdev_get_histogram", 00:05:29.921 "bdev_enable_histogram", 00:05:29.921 "bdev_set_qos_limit", 00:05:29.921 "bdev_set_qd_sampling_period", 00:05:29.921 "bdev_get_bdevs", 00:05:29.921 "bdev_reset_iostat", 00:05:29.921 "bdev_get_iostat", 00:05:29.921 "bdev_examine", 00:05:29.921 "bdev_wait_for_examine", 00:05:29.921 "bdev_set_options", 00:05:29.921 "accel_get_stats", 00:05:29.921 "accel_set_options", 00:05:29.921 "accel_set_driver", 00:05:29.921 "accel_crypto_key_destroy", 00:05:29.921 "accel_crypto_keys_get", 00:05:29.921 "accel_crypto_key_create", 00:05:29.921 "accel_assign_opc", 00:05:29.921 "accel_get_module_info", 00:05:29.921 "accel_get_opc_assignments", 00:05:29.921 "vmd_rescan", 00:05:29.921 "vmd_remove_device", 00:05:29.921 "vmd_enable", 00:05:29.921 "sock_get_default_impl", 00:05:29.921 "sock_set_default_impl", 00:05:29.921 "sock_impl_set_options", 00:05:29.921 "sock_impl_get_options", 00:05:29.921 "iobuf_get_stats", 00:05:29.921 "iobuf_set_options", 00:05:29.921 "keyring_get_keys", 00:05:29.921 "vfu_tgt_set_base_path", 00:05:29.921 "framework_get_pci_devices", 00:05:29.921 "framework_get_config", 00:05:29.921 "framework_get_subsystems", 00:05:29.921 "fsdev_set_opts", 00:05:29.921 "fsdev_get_opts", 00:05:29.921 "trace_get_info", 
00:05:29.921 "trace_get_tpoint_group_mask", 00:05:29.921 "trace_disable_tpoint_group", 00:05:29.921 "trace_enable_tpoint_group", 00:05:29.921 "trace_clear_tpoint_mask", 00:05:29.921 "trace_set_tpoint_mask", 00:05:29.921 "notify_get_notifications", 00:05:29.921 "notify_get_types", 00:05:29.921 "spdk_get_version", 00:05:29.921 "rpc_get_methods" 00:05:29.921 ] 00:05:29.921 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:29.921 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:29.921 06:11:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 778792 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 778792 ']' 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 778792 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778792 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778792' 00:05:29.921 killing process with pid 778792 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 778792 00:05:29.921 06:11:21 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 778792 00:05:30.180 00:05:30.180 real 0m1.111s 00:05:30.180 user 0m1.866s 00:05:30.180 sys 0m0.460s 00:05:30.180 06:11:21 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.180 06:11:21 spdkcli_tcp -- common/autotest_common.sh@10 -- 
# set +x 00:05:30.180 ************************************ 00:05:30.180 END TEST spdkcli_tcp 00:05:30.180 ************************************ 00:05:30.439 06:11:21 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.439 06:11:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.439 06:11:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.439 06:11:21 -- common/autotest_common.sh@10 -- # set +x 00:05:30.439 ************************************ 00:05:30.439 START TEST dpdk_mem_utility 00:05:30.439 ************************************ 00:05:30.439 06:11:21 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:30.439 * Looking for test storage... 00:05:30.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:30.439 06:11:21 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:30.439 06:11:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:30.439 06:11:21 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:30.439 06:11:22 
dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:30.439 06:11:22 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:30.439 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.439 --rc genhtml_branch_coverage=1 00:05:30.439 --rc genhtml_function_coverage=1 00:05:30.439 --rc genhtml_legend=1 00:05:30.439 --rc geninfo_all_blocks=1 00:05:30.439 --rc geninfo_unexecuted_blocks=1 00:05:30.439 00:05:30.439 ' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.439 --rc genhtml_branch_coverage=1 00:05:30.439 --rc genhtml_function_coverage=1 00:05:30.439 --rc genhtml_legend=1 00:05:30.439 --rc geninfo_all_blocks=1 00:05:30.439 --rc geninfo_unexecuted_blocks=1 00:05:30.439 00:05:30.439 ' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.439 --rc genhtml_branch_coverage=1 00:05:30.439 --rc genhtml_function_coverage=1 00:05:30.439 --rc genhtml_legend=1 00:05:30.439 --rc geninfo_all_blocks=1 00:05:30.439 --rc geninfo_unexecuted_blocks=1 00:05:30.439 00:05:30.439 ' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:30.439 --rc genhtml_branch_coverage=1 00:05:30.439 --rc genhtml_function_coverage=1 00:05:30.439 --rc genhtml_legend=1 00:05:30.439 --rc geninfo_all_blocks=1 00:05:30.439 --rc geninfo_unexecuted_blocks=1 00:05:30.439 00:05:30.439 ' 00:05:30.439 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.439 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=778983 00:05:30.439 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 778983 00:05:30.439 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 778983 ']' 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:30.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.439 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.698 [2024-12-13 06:11:22.127950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:30.698 [2024-12-13 06:11:22.127998] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid778983 ] 00:05:30.698 [2024-12-13 06:11:22.203708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:30.698 [2024-12-13 06:11:22.226353] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.958 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.958 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:30.958 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:30.958 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:30.958 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:30.958 06:11:22 
dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:30.958 { 00:05:30.958 "filename": "/tmp/spdk_mem_dump.txt" 00:05:30.958 } 00:05:30.958 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:30.958 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:30.958 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:30.958 1 heaps totaling size 818.000000 MiB 00:05:30.958 size: 818.000000 MiB heap id: 0 00:05:30.958 end heaps---------- 00:05:30.958 9 mempools totaling size 603.782043 MiB 00:05:30.958 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:30.958 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:30.958 size: 100.555481 MiB name: bdev_io_778983 00:05:30.958 size: 50.003479 MiB name: msgpool_778983 00:05:30.958 size: 36.509338 MiB name: fsdev_io_778983 00:05:30.958 size: 21.763794 MiB name: PDU_Pool 00:05:30.958 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:30.958 size: 4.133484 MiB name: evtpool_778983 00:05:30.958 size: 0.026123 MiB name: Session_Pool 00:05:30.958 end mempools------- 00:05:30.958 6 memzones totaling size 4.142822 MiB 00:05:30.958 size: 1.000366 MiB name: RG_ring_0_778983 00:05:30.958 size: 1.000366 MiB name: RG_ring_1_778983 00:05:30.958 size: 1.000366 MiB name: RG_ring_4_778983 00:05:30.958 size: 1.000366 MiB name: RG_ring_5_778983 00:05:30.958 size: 0.125366 MiB name: RG_ring_2_778983 00:05:30.958 size: 0.015991 MiB name: RG_ring_3_778983 00:05:30.958 end memzones------- 00:05:30.958 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:30.958 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:30.958 list of free elements. 
size: 10.852478 MiB
00:05:30.958 element at address: 0x200019200000 with size: 0.999878 MiB
00:05:30.958 element at address: 0x200019400000 with size: 0.999878 MiB
00:05:30.958 element at address: 0x200000400000 with size: 0.998535 MiB
00:05:30.958 element at address: 0x200032000000 with size: 0.994446 MiB
00:05:30.958 element at address: 0x200006400000 with size: 0.959839 MiB
00:05:30.958 element at address: 0x200012c00000 with size: 0.944275 MiB
00:05:30.958 element at address: 0x200019600000 with size: 0.936584 MiB
00:05:30.958 element at address: 0x200000200000 with size: 0.717346 MiB
00:05:30.958 element at address: 0x20001ae00000 with size: 0.582886 MiB
00:05:30.958 element at address: 0x200000c00000 with size: 0.495422 MiB
00:05:30.958 element at address: 0x20000a600000 with size: 0.490723 MiB
00:05:30.958 element at address: 0x200019800000 with size: 0.485657 MiB
00:05:30.958 element at address: 0x200003e00000 with size: 0.481934 MiB
00:05:30.958 element at address: 0x200028200000 with size: 0.410034 MiB
00:05:30.958 element at address: 0x200000800000 with size: 0.355042 MiB
00:05:30.958 list of standard malloc elements. size: 199.218628 MiB
00:05:30.958 element at address: 0x20000a7fff80 with size: 132.000122 MiB
00:05:30.958 element at address: 0x2000065fff80 with size: 64.000122 MiB
00:05:30.958 element at address: 0x2000192fff80 with size: 1.000122 MiB
00:05:30.958 element at address: 0x2000194fff80 with size: 1.000122 MiB
00:05:30.958 element at address: 0x2000196fff80 with size: 1.000122 MiB
00:05:30.958 element at address: 0x2000003d9f00 with size: 0.140747 MiB
00:05:30.958 element at address: 0x2000196eff00 with size: 0.062622 MiB
00:05:30.958 element at address: 0x2000003fdf80 with size: 0.007935 MiB
00:05:30.958 element at address: 0x2000196efdc0 with size: 0.000305 MiB
00:05:30.958 element at address: 0x2000002d7c40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000003d9e40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000004ffa00 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000004ffac0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000004ffb80 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000004ffd80 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000004ffe40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000085ae40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000085b040 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000085f300 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000087f5c0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000087f680 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000008ff940 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000008ffb40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200000c7ed40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200000cff000 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200000cff0c0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200003e7b600 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200003e7b6c0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200003efb980 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000064fdd80 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000a67da00 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000a67dac0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20000a6fdd80 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200012cf1bc0 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000196efc40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000196efd00 with size: 0.000183 MiB
00:05:30.958 element at address: 0x2000198bc740 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20001ae95380 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20001ae95440 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200028268f80 with size: 0.000183 MiB
00:05:30.958 element at address: 0x200028269040 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20002826fc40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20002826fe40 with size: 0.000183 MiB
00:05:30.958 element at address: 0x20002826ff00 with size: 0.000183 MiB
00:05:30.959 list of memzone associated elements. size: 607.928894 MiB
00:05:30.959 element at address: 0x20001ae95500 with size: 211.416748 MiB
00:05:30.959 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:30.959 element at address: 0x20002826ffc0 with size: 157.562561 MiB
00:05:30.959 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:30.959 element at address: 0x200012df1e80 with size: 100.055054 MiB
00:05:30.959 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_778983_0
00:05:30.959 element at address: 0x200000dff380 with size: 48.003052 MiB
00:05:30.959 associated memzone info: size: 48.002930 MiB name: MP_msgpool_778983_0
00:05:30.959 element at address: 0x200003ffdb80 with size: 36.008911 MiB
00:05:30.959 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_778983_0
00:05:30.959 element at address: 0x2000199be940 with size: 20.255554 MiB
00:05:30.959 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:30.959 element at address: 0x2000321feb40 with size: 18.005066 MiB
00:05:30.959 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:30.959 element at address: 0x2000004fff00 with size: 3.000244 MiB
00:05:30.959 associated memzone info: size: 3.000122 MiB name: MP_evtpool_778983_0
00:05:30.959 element at address: 0x2000009ffe00 with size: 2.000488 MiB
00:05:30.959 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_778983
00:05:30.959 element at address: 0x2000002d7d00 with size: 1.008118 MiB
00:05:30.959 associated memzone info: size: 1.007996 MiB name: MP_evtpool_778983
00:05:30.959 element at address: 0x20000a6fde40 with size: 1.008118 MiB
00:05:30.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:30.959 element at address: 0x2000198bc800 with size: 1.008118 MiB
00:05:30.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:30.959 element at address: 0x2000064fde40 with size: 1.008118 MiB
00:05:30.959 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:30.959 element at address: 0x200003efba40 with size: 1.008118 MiB
00:05:30.959 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:30.959 element at address: 0x200000cff180 with size: 1.000488 MiB
00:05:30.959 associated memzone info: size: 1.000366 MiB name: RG_ring_0_778983
00:05:30.959 element at address: 0x2000008ffc00 with size: 1.000488 MiB
00:05:30.959 associated memzone info: size: 1.000366 MiB name: RG_ring_1_778983
00:05:30.959 element at address: 0x200012cf1c80 with size: 1.000488 MiB
00:05:30.959 associated memzone info: size: 1.000366 MiB name: RG_ring_4_778983
00:05:30.959 element at address: 0x2000320fe940 with size: 1.000488 MiB
00:05:30.959 associated memzone info: size: 1.000366 MiB name: RG_ring_5_778983
00:05:30.959 element at address: 0x20000087f740 with size: 0.500488 MiB
00:05:30.959 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_778983
00:05:30.959 element at address: 0x200000c7ee00 with size: 0.500488 MiB
00:05:30.959 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_778983
00:05:30.959 element at address: 0x20000a67db80 with size: 0.500488 MiB
00:05:30.959 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:30.959 element at address: 0x200003e7b780 with size: 0.500488 MiB
00:05:30.959 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:30.959 element at address: 0x20001987c540 with size: 0.250488 MiB
00:05:30.959 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:30.959 element at address: 0x2000002b7a40 with size: 0.125488 MiB
00:05:30.959 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_778983
00:05:30.959 element at address: 0x20000085f3c0 with size: 0.125488 MiB
00:05:30.959 associated memzone info: size: 0.125366 MiB name: RG_ring_2_778983
00:05:30.959 element at address: 0x2000064f5b80 with size: 0.031738 MiB
00:05:30.959 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:30.959 element at address: 0x200028269100 with size: 0.023743 MiB
00:05:30.959 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:30.959 element at address: 0x20000085b100 with size: 0.016113 MiB
00:05:30.959 associated memzone info: size: 0.015991 MiB name: RG_ring_3_778983
00:05:30.959 element at address: 0x20002826f240 with size: 0.002441 MiB
00:05:30.959 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:30.959 element at address: 0x2000004ffc40 with size: 0.000305 MiB
00:05:30.959 associated memzone info: size: 0.000183 MiB name: MP_msgpool_778983
00:05:30.959 element at address: 0x2000008ffa00 with size: 0.000305 MiB
00:05:30.959 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_778983
00:05:30.959 element at address: 0x20000085af00 with size: 0.000305 MiB
00:05:30.959 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_778983
00:05:30.959 element at address: 0x20002826fd00 with size: 0.000305 MiB
00:05:30.959 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:30.959 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:30.959 06:11:22 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 778983
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 778983 ']'
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 778983
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 778983
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 778983'
00:05:30.959 killing process with pid 778983
06:11:22 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 778983
00:05:30.959 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 778983
00:05:31.526
00:05:31.526 real 0m0.985s
00:05:31.526 user 0m0.922s
00:05:31.526 sys 0m0.410s
00:05:31.526 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:31.526 06:11:22 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:31.526 ************************************
00:05:31.526 END TEST dpdk_mem_utility
00:05:31.526 ************************************
00:05:31.526 06:11:22 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:31.526 06:11:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:31.526 06:11:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.526 06:11:22 -- common/autotest_common.sh@10 -- # set +x
00:05:31.526 ************************************
00:05:31.526 START TEST event
00:05:31.526 ************************************
00:05:31.526 06:11:22 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh
00:05:31.526 * Looking for test storage...
00:05:31.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1711 -- # lcov --version
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:31.526 06:11:23 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:31.526 06:11:23 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:31.526 06:11:23 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:31.526 06:11:23 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:31.526 06:11:23 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:31.526 06:11:23 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:31.526 06:11:23 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:31.526 06:11:23 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:31.526 06:11:23 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:31.526 06:11:23 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:31.526 06:11:23 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:31.526 06:11:23 event -- scripts/common.sh@344 -- # case "$op" in
00:05:31.526 06:11:23 event -- scripts/common.sh@345 -- # : 1
00:05:31.526 06:11:23 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:31.526 06:11:23 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:31.526 06:11:23 event -- scripts/common.sh@365 -- # decimal 1
00:05:31.526 06:11:23 event -- scripts/common.sh@353 -- # local d=1
00:05:31.526 06:11:23 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:31.526 06:11:23 event -- scripts/common.sh@355 -- # echo 1
00:05:31.526 06:11:23 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:31.526 06:11:23 event -- scripts/common.sh@366 -- # decimal 2
00:05:31.526 06:11:23 event -- scripts/common.sh@353 -- # local d=2
00:05:31.526 06:11:23 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:31.526 06:11:23 event -- scripts/common.sh@355 -- # echo 2
00:05:31.526 06:11:23 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:31.526 06:11:23 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:31.526 06:11:23 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:31.526 06:11:23 event -- scripts/common.sh@368 -- # return 0
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.526 --rc genhtml_branch_coverage=1
00:05:31.526 --rc genhtml_function_coverage=1
00:05:31.526 --rc genhtml_legend=1
00:05:31.526 --rc geninfo_all_blocks=1
00:05:31.526 --rc geninfo_unexecuted_blocks=1
00:05:31.526
00:05:31.526 '
00:05:31.526 06:11:23 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:31.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.526 --rc genhtml_branch_coverage=1
00:05:31.527 --rc genhtml_function_coverage=1
00:05:31.527 --rc genhtml_legend=1
00:05:31.527 --rc geninfo_all_blocks=1
00:05:31.527 --rc geninfo_unexecuted_blocks=1
00:05:31.527
00:05:31.527 '
00:05:31.527 06:11:23 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:31.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.527 --rc genhtml_branch_coverage=1
00:05:31.527 --rc genhtml_function_coverage=1
00:05:31.527 --rc genhtml_legend=1
00:05:31.527 --rc geninfo_all_blocks=1
00:05:31.527 --rc geninfo_unexecuted_blocks=1
00:05:31.527
00:05:31.527 '
00:05:31.527 06:11:23 event -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:31.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:31.527 --rc genhtml_branch_coverage=1
00:05:31.527 --rc genhtml_function_coverage=1
00:05:31.527 --rc genhtml_legend=1
00:05:31.527 --rc geninfo_all_blocks=1
00:05:31.527 --rc geninfo_unexecuted_blocks=1
00:05:31.527
00:05:31.527 '
00:05:31.527 06:11:23 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:05:31.527 06:11:23 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:31.527 06:11:23 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:31.527 06:11:23 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']'
00:05:31.527 06:11:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:31.527 06:11:23 event -- common/autotest_common.sh@10 -- # set +x
00:05:31.527 ************************************
00:05:31.527 START TEST event_perf
00:05:31.527 ************************************
00:05:31.527 06:11:23 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:31.785 Running I/O for 1 seconds...[2024-12-13 06:11:23.186446] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:31.785 [2024-12-13 06:11:23.186522] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779260 ]
00:05:31.785 [2024-12-13 06:11:23.265688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:31.785 [2024-12-13 06:11:23.291434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:31.785 [2024-12-13 06:11:23.291556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:31.785 [2024-12-13 06:11:23.291589] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:31.785 [2024-12-13 06:11:23.291590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:32.721 Running I/O for 1 seconds...
00:05:32.721 lcore 0: 205500
00:05:32.721 lcore 1: 205499
00:05:32.721 lcore 2: 205500
00:05:32.721 lcore 3: 205500
00:05:32.721 done.
00:05:32.721
00:05:32.721 real 0m1.160s
00:05:32.721 user 0m4.077s
00:05:32.721 sys 0m0.080s
00:05:32.721 06:11:24 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:32.721 06:11:24 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:05:32.721 ************************************
00:05:32.721 END TEST event_perf
00:05:32.721 ************************************
00:05:32.721 06:11:24 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.721 06:11:24 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:32.721 06:11:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:32.721 06:11:24 event -- common/autotest_common.sh@10 -- # set +x
00:05:32.979 ************************************
00:05:32.979 START TEST event_reactor
00:05:32.979 ************************************
00:05:32.979 06:11:24 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:05:32.979 [2024-12-13 06:11:24.419849] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:32.979 [2024-12-13 06:11:24.419920] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779504 ]
00:05:32.979 [2024-12-13 06:11:24.500477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:32.979 [2024-12-13 06:11:24.521404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:33.915 test_start
00:05:33.915 oneshot
00:05:33.915 tick 100
00:05:33.915 tick 100
00:05:33.915 tick 250
00:05:33.915 tick 100
00:05:33.915 tick 100
00:05:33.915 tick 250
00:05:33.915 tick 100
00:05:33.915 tick 500
00:05:33.915 tick 100
00:05:33.915 tick 100
00:05:33.915 tick 250
00:05:33.915 tick 100
00:05:33.915 tick 100
00:05:33.915 test_end
00:05:33.915
00:05:33.915 real 0m1.154s
00:05:33.915 user 0m1.074s
00:05:33.915 sys 0m0.076s
00:05:33.915 06:11:25 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:33.915 06:11:25 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:05:33.915 ************************************
00:05:33.915 END TEST event_reactor
00:05:33.915 ************************************
00:05:34.174 06:11:25 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.174 06:11:25 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:05:34.174 06:11:25 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:34.174 06:11:25 event -- common/autotest_common.sh@10 -- # set +x
00:05:34.174 ************************************
00:05:34.174 START TEST event_reactor_perf
00:05:34.174 ************************************
00:05:34.174 06:11:25 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:05:34.174 [2024-12-13 06:11:25.644726] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:34.174 [2024-12-13 06:11:25.644794] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779749 ]
00:05:34.174 [2024-12-13 06:11:25.723097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:34.174 [2024-12-13 06:11:25.744944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.550 test_start
00:05:35.550 test_end
00:05:35.550 Performance: 517765 events per second
00:05:35.550
00:05:35.550 real 0m1.152s
00:05:35.550 user 0m1.069s
00:05:35.550 sys 0m0.079s
00:05:35.550 06:11:26 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:35.550 06:11:26 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:05:35.550 ************************************
00:05:35.550 END TEST event_reactor_perf
00:05:35.550 ************************************
00:05:35.550 06:11:26 event -- event/event.sh@49 -- # uname -s
00:05:35.550 06:11:26 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:05:35.550 06:11:26 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:35.550 06:11:26 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.550 06:11:26 event -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.550 06:11:26 event -- common/autotest_common.sh@10 -- # set +x
00:05:35.550 ************************************
00:05:35.550 START TEST event_scheduler
00:05:35.550 ************************************
00:05:35.550 06:11:26 event.event_scheduler -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:05:35.550 * Looking for test storage...
00:05:35.550 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:05:35.550 06:11:26 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:35.550 06:11:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version
00:05:35.550 06:11:26 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-:
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-:
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<'
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@345 -- # : 1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@353 -- # local d=2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@355 -- # echo 2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:35.551 06:11:27 event.event_scheduler -- scripts/common.sh@368 -- # return 0
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:35.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.551 --rc genhtml_branch_coverage=1
00:05:35.551 --rc genhtml_function_coverage=1
00:05:35.551 --rc genhtml_legend=1
00:05:35.551 --rc geninfo_all_blocks=1
00:05:35.551 --rc geninfo_unexecuted_blocks=1
00:05:35.551
00:05:35.551 '
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:35.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.551 --rc genhtml_branch_coverage=1
00:05:35.551 --rc genhtml_legend=1
00:05:35.551 --rc geninfo_all_blocks=1
00:05:35.551 --rc geninfo_unexecuted_blocks=1
00:05:35.551
00:05:35.551 '
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:35.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.551 --rc genhtml_branch_coverage=1
00:05:35.551 --rc genhtml_function_coverage=1
00:05:35.551 --rc genhtml_legend=1
00:05:35.551 --rc geninfo_all_blocks=1
00:05:35.551 --rc geninfo_unexecuted_blocks=1
00:05:35.551
00:05:35.551 '
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:35.551 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:35.551 --rc genhtml_branch_coverage=1
00:05:35.551 --rc genhtml_function_coverage=1
00:05:35.551 --rc genhtml_legend=1
00:05:35.551 --rc geninfo_all_blocks=1
00:05:35.551 --rc geninfo_unexecuted_blocks=1
00:05:35.551
00:05:35.551 '
00:05:35.551 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:05:35.551 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=780023
00:05:35.551 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:05:35.551 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:05:35.551 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 780023
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 780023 ']'
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:35.551 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:35.551 06:11:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.551 [2024-12-13 06:11:27.073763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:05:35.551 [2024-12-13 06:11:27.073810] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780023 ]
00:05:35.551 [2024-12-13 06:11:27.150293] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:35.551 [2024-12-13 06:11:27.175490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:35.551 [2024-12-13 06:11:27.175556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:05:35.551 [2024-12-13 06:11:27.175664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:05:35.551 [2024-12-13 06:11:27.175665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:05:35.810 06:11:27 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:35.810 06:11:27 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0
00:05:35.810 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:05:35.810 06:11:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 [2024-12-13 06:11:27.236339] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:05:35.811 [2024-12-13 06:11:27.236357] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor
00:05:35.811 [2024-12-13 06:11:27.236366] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:05:35.811 [2024-12-13 06:11:27.236372] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:05:35.811 [2024-12-13 06:11:27.236378] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 [2024-12-13 06:11:27.310078] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 ************************************
00:05:35.811 START TEST scheduler_create_thread
00:05:35.811 ************************************
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 2
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 3
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 4
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 5
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 6
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 7
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 8
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 9
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 10
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:35.811 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x
00:05:36.378 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:36.378 06:11:27
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:36.378 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.378 06:11:27 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:37.753 06:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:37.753 06:11:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:37.753 06:11:29 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:37.753 06:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:37.753 06:11:29 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.128 06:11:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.128 00:05:39.128 real 0m3.102s 00:05:39.128 user 0m0.026s 00:05:39.128 sys 0m0.004s 00:05:39.128 06:11:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.128 06:11:30 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:39.128 ************************************ 00:05:39.128 END TEST scheduler_create_thread 00:05:39.128 ************************************ 00:05:39.128 06:11:30 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:39.128 06:11:30 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 780023 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 780023 ']' 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 780023 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780023 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780023' 00:05:39.128 killing process with pid 780023 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 780023 00:05:39.128 06:11:30 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 780023 00:05:39.386 [2024-12-13 06:11:30.829350] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:39.386 00:05:39.386 real 0m4.160s 00:05:39.386 user 0m6.698s 00:05:39.386 sys 0m0.385s 00:05:39.386 06:11:31 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.386 06:11:31 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:39.386 ************************************ 00:05:39.386 END TEST event_scheduler 00:05:39.386 ************************************ 00:05:39.645 06:11:31 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:39.645 06:11:31 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:39.645 06:11:31 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.645 06:11:31 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.645 06:11:31 event -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 ************************************ 00:05:39.645 START TEST app_repeat 00:05:39.645 ************************************ 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@19 -- # repeat_pid=780747 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 780747' 00:05:39.645 Process app_repeat pid: 780747 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:39.645 spdk_app_start Round 0 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@25 -- # waitforlisten 780747 /var/tmp/spdk-nbd.sock 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 780747 ']' 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:39.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:39.645 [2024-12-13 06:11:31.101984] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:39.645 [2024-12-13 06:11:31.102031] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780747 ] 00:05:39.645 [2024-12-13 06:11:31.175839] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.645 [2024-12-13 06:11:31.199969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.645 [2024-12-13 06:11:31.199973] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.645 06:11:31 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:39.645 06:11:31 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:39.903 Malloc0 00:05:39.903 06:11:31 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:40.161 Malloc1 00:05:40.161 06:11:31 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:40.161 
06:11:31 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.161 06:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:40.420 /dev/nbd0 00:05:40.420 06:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:40.420 06:11:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:40.420 1+0 records in 00:05:40.420 1+0 records out 00:05:40.420 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185194 s, 22.1 MB/s 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.420 06:11:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.420 06:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.420 06:11:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.420 06:11:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:40.679 /dev/nbd1 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:40.679 06:11:32 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:40.679 1+0 records in 00:05:40.679 1+0 records out 00:05:40.679 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226443 s, 18.1 MB/s 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:40.679 06:11:32 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.679 06:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:40.937 { 00:05:40.937 "nbd_device": "/dev/nbd0", 00:05:40.937 "bdev_name": "Malloc0" 00:05:40.937 }, 00:05:40.937 { 00:05:40.937 "nbd_device": "/dev/nbd1", 00:05:40.937 "bdev_name": "Malloc1" 00:05:40.937 } 00:05:40.937 ]' 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:40.937 { 00:05:40.937 "nbd_device": "/dev/nbd0", 00:05:40.937 "bdev_name": "Malloc0" 00:05:40.937 
}, 00:05:40.937 { 00:05:40.937 "nbd_device": "/dev/nbd1", 00:05:40.937 "bdev_name": "Malloc1" 00:05:40.937 } 00:05:40.937 ]' 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:40.937 /dev/nbd1' 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:40.937 /dev/nbd1' 00:05:40.937 06:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:40.938 256+0 records in 00:05:40.938 256+0 records out 00:05:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108297 s, 96.8 MB/s 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:40.938 256+0 records in 00:05:40.938 256+0 records out 00:05:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141447 s, 74.1 MB/s 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:40.938 256+0 records in 00:05:40.938 256+0 records out 00:05:40.938 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0148238 s, 70.7 MB/s 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:40.938 06:11:32 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:40.938 06:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:41.196 06:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:41.197 06:11:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:41.456 06:11:32 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:41.456 06:11:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:41.714 06:11:33 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:41.714 06:11:33 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:41.973 06:11:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:41.973 [2024-12-13 06:11:33.549983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:41.973 [2024-12-13 06:11:33.570117] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:41.973 [2024-12-13 06:11:33.570118] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.973 [2024-12-13 06:11:33.610347] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:41.973 [2024-12-13 06:11:33.610386] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:45.257 06:11:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:45.257 06:11:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:45.257 spdk_app_start Round 1 00:05:45.257 06:11:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 780747 /var/tmp/spdk-nbd.sock 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 780747 ']' 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:45.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:45.257 06:11:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:45.257 06:11:36 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.257 Malloc0 00:05:45.257 06:11:36 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:45.515 Malloc1 00:05:45.515 06:11:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:45.515 06:11:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:45.516 06:11:37 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:45.516 06:11:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:45.516 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:45.516 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.516 06:11:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:45.774 /dev/nbd0 00:05:45.774 06:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:45.774 06:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:45.774 1+0 records in 00:05:45.774 1+0 records out 00:05:45.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184248 s, 22.2 MB/s 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:45.774 06:11:37 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:45.774 06:11:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:45.774 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:45.774 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:45.774 06:11:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:46.032 /dev/nbd1 00:05:46.032 06:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:46.033 1+0 records in 00:05:46.033 1+0 records out 00:05:46.033 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000209165 s, 19.6 MB/s 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:46.033 06:11:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.033 06:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:46.291 { 00:05:46.291 "nbd_device": "/dev/nbd0", 00:05:46.291 "bdev_name": "Malloc0" 00:05:46.291 }, 00:05:46.291 { 00:05:46.291 "nbd_device": "/dev/nbd1", 00:05:46.291 "bdev_name": "Malloc1" 00:05:46.291 } 00:05:46.291 ]' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:46.291 { 00:05:46.291 "nbd_device": "/dev/nbd0", 00:05:46.291 "bdev_name": "Malloc0" 00:05:46.291 }, 00:05:46.291 { 00:05:46.291 "nbd_device": "/dev/nbd1", 00:05:46.291 "bdev_name": "Malloc1" 00:05:46.291 } 00:05:46.291 ]' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:46.291 /dev/nbd1' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:46.291 /dev/nbd1' 00:05:46.291 
06:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:46.291 256+0 records in 00:05:46.291 256+0 records out 00:05:46.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00997575 s, 105 MB/s 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:46.291 256+0 records in 00:05:46.291 256+0 records out 00:05:46.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145115 s, 72.3 MB/s 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:46.291 256+0 records in 00:05:46.291 256+0 records out 00:05:46.291 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155189 s, 67.6 MB/s 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:46.291 06:11:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.292 06:11:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:46.551 06:11:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:46.809 06:11:38 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:46.809 06:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:47.068 06:11:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:47.068 06:11:38 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:47.326 06:11:38 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:47.326 [2024-12-13 06:11:38.870027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.326 [2024-12-13 06:11:38.890090] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.326 [2024-12-13 06:11:38.890090] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.326 [2024-12-13 06:11:38.931058] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:47.326 [2024-12-13 06:11:38.931097] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:50.611 06:11:41 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:50.611 06:11:41 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:50.611 spdk_app_start Round 2 00:05:50.611 06:11:41 event.app_repeat -- event/event.sh@25 -- # waitforlisten 780747 /var/tmp/spdk-nbd.sock 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 780747 ']' 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:50.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:50.611 06:11:41 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:50.611 06:11:41 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.611 Malloc0 00:05:50.611 06:11:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:50.870 Malloc1 00:05:50.870 06:11:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:50.870 06:11:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:51.129 /dev/nbd0 00:05:51.129 06:11:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:51.129 06:11:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.129 1+0 records in 00:05:51.129 1+0 records out 00:05:51.129 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000215309 s, 19.0 MB/s 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.129 06:11:42 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.129 06:11:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.129 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.129 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.129 06:11:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:51.386 /dev/nbd1 00:05:51.386 06:11:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:51.386 06:11:42 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:51.386 06:11:42 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:51.387 1+0 records in 00:05:51.387 1+0 records out 00:05:51.387 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204769 s, 20.0 MB/s 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:51.387 06:11:42 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:51.387 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:51.387 06:11:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:51.387 06:11:42 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:51.387 06:11:42 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.387 06:11:42 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:51.644 { 00:05:51.644 "nbd_device": "/dev/nbd0", 00:05:51.644 "bdev_name": "Malloc0" 00:05:51.644 }, 00:05:51.644 { 00:05:51.644 "nbd_device": "/dev/nbd1", 00:05:51.644 "bdev_name": "Malloc1" 00:05:51.644 } 00:05:51.644 ]' 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:51.644 { 00:05:51.644 "nbd_device": "/dev/nbd0", 00:05:51.644 "bdev_name": "Malloc0" 00:05:51.644 }, 00:05:51.644 { 00:05:51.644 "nbd_device": "/dev/nbd1", 00:05:51.644 "bdev_name": "Malloc1" 00:05:51.644 } 00:05:51.644 ]' 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:51.644 /dev/nbd1' 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:51.644 /dev/nbd1' 00:05:51.644 
06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:51.644 06:11:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:51.645 256+0 records in 00:05:51.645 256+0 records out 00:05:51.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102076 s, 103 MB/s 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:51.645 256+0 records in 00:05:51.645 256+0 records out 00:05:51.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139812 s, 75.0 MB/s 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:51.645 256+0 records in 00:05:51.645 256+0 records out 00:05:51.645 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149263 s, 70.3 MB/s 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.645 06:11:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:51.903 06:11:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:52.162 06:11:43 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:52.162 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:52.420 06:11:43 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:52.420 06:11:43 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:52.679 06:11:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:52.679 [2024-12-13 06:11:44.227830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.679 [2024-12-13 06:11:44.247728] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.679 [2024-12-13 06:11:44.247728] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.679 [2024-12-13 06:11:44.288602] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:52.679 [2024-12-13 06:11:44.288640] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:55.962 06:11:47 event.app_repeat -- event/event.sh@38 -- # waitforlisten 780747 /var/tmp/spdk-nbd.sock 00:05:55.962 06:11:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 780747 ']' 00:05:55.962 06:11:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:55.962 06:11:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:55.962 06:11:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:55.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:55.962 06:11:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:55.963 06:11:47 event.app_repeat -- event/event.sh@39 -- # killprocess 780747 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 780747 ']' 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 780747 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 780747 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 780747' 00:05:55.963 killing process with pid 780747 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@973 -- # kill 780747 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@978 -- # wait 780747 00:05:55.963 spdk_app_start is called in Round 0. 00:05:55.963 Shutdown signal received, stop current app iteration 00:05:55.963 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:55.963 spdk_app_start is called in Round 1. 00:05:55.963 Shutdown signal received, stop current app iteration 00:05:55.963 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:55.963 spdk_app_start is called in Round 2. 
00:05:55.963 Shutdown signal received, stop current app iteration 00:05:55.963 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 reinitialization... 00:05:55.963 spdk_app_start is called in Round 3. 00:05:55.963 Shutdown signal received, stop current app iteration 00:05:55.963 06:11:47 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:55.963 06:11:47 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:55.963 00:05:55.963 real 0m16.410s 00:05:55.963 user 0m36.183s 00:05:55.963 sys 0m2.560s 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:55.963 06:11:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:55.963 ************************************ 00:05:55.963 END TEST app_repeat 00:05:55.963 ************************************ 00:05:55.963 06:11:47 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:55.963 06:11:47 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:55.963 06:11:47 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:55.963 06:11:47 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:55.963 06:11:47 event -- common/autotest_common.sh@10 -- # set +x 00:05:55.963 ************************************ 00:05:55.963 START TEST cpu_locks 00:05:55.963 ************************************ 00:05:55.963 06:11:47 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:56.222 * Looking for test storage... 
00:05:56.222 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:56.222 06:11:47 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:56.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.222 --rc genhtml_branch_coverage=1 00:05:56.222 --rc genhtml_function_coverage=1 00:05:56.222 --rc genhtml_legend=1 00:05:56.222 --rc geninfo_all_blocks=1 00:05:56.222 --rc geninfo_unexecuted_blocks=1 00:05:56.222 00:05:56.222 ' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:56.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.222 --rc genhtml_branch_coverage=1 00:05:56.222 --rc genhtml_function_coverage=1 00:05:56.222 --rc genhtml_legend=1 00:05:56.222 --rc geninfo_all_blocks=1 00:05:56.222 --rc geninfo_unexecuted_blocks=1 
00:05:56.222 00:05:56.222 ' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:56.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.222 --rc genhtml_branch_coverage=1 00:05:56.222 --rc genhtml_function_coverage=1 00:05:56.222 --rc genhtml_legend=1 00:05:56.222 --rc geninfo_all_blocks=1 00:05:56.222 --rc geninfo_unexecuted_blocks=1 00:05:56.222 00:05:56.222 ' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:56.222 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:56.222 --rc genhtml_branch_coverage=1 00:05:56.222 --rc genhtml_function_coverage=1 00:05:56.222 --rc genhtml_legend=1 00:05:56.222 --rc geninfo_all_blocks=1 00:05:56.222 --rc geninfo_unexecuted_blocks=1 00:05:56.222 00:05:56.222 ' 00:05:56.222 06:11:47 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:56.222 06:11:47 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:56.222 06:11:47 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:56.222 06:11:47 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:56.222 06:11:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.222 ************************************ 00:05:56.222 START TEST default_locks 00:05:56.222 ************************************ 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # 
spdk_tgt_pid=783676 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 783676 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 783676 ']' 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.222 06:11:47 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.222 [2024-12-13 06:11:47.807021] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:56.222 [2024-12-13 06:11:47.807063] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783676 ] 00:05:56.480 [2024-12-13 06:11:47.882355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.480 [2024-12-13 06:11:47.905801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.480 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.480 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:56.480 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 783676 00:05:56.480 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 783676 00:05:56.480 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:57.047 lslocks: write error 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 783676 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 783676 ']' 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 783676 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783676 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 783676' 00:05:57.047 killing process with pid 783676 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 783676 00:05:57.047 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 783676 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 783676 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 783676 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 783676 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 783676 ']' 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (783676) - No such process 00:05:57.306 ERROR: process (pid: 783676) is no longer running 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.306 00:05:57.306 real 0m1.147s 00:05:57.306 user 0m1.100s 00:05:57.306 sys 0m0.545s 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.306 06:11:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.306 ************************************ 00:05:57.306 END TEST default_locks 00:05:57.306 ************************************ 00:05:57.306 06:11:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:57.306 06:11:48 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.306 06:11:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.306 06:11:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:57.565 ************************************ 00:05:57.565 START TEST default_locks_via_rpc 00:05:57.565 ************************************ 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=783925 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 783925 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 783925 ']' 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.565 06:11:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.565 [2024-12-13 06:11:49.026833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:05:57.565 [2024-12-13 06:11:49.026872] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783925 ] 00:05:57.565 [2024-12-13 06:11:49.101720] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.566 [2024-12-13 06:11:49.124425] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.825 06:11:49 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 783925 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 783925 00:05:57.825 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 783925 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 783925 ']' 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 783925 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 783925 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 783925' 00:05:58.392 killing process with pid 783925 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 783925 00:05:58.392 06:11:49 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 783925 00:05:58.650 00:05:58.650 real 0m1.140s 00:05:58.650 user 0m1.086s 00:05:58.650 sys 0m0.533s 00:05:58.650 06:11:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.650 06:11:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.650 ************************************ 00:05:58.650 END TEST default_locks_via_rpc 00:05:58.650 ************************************ 00:05:58.650 06:11:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:58.650 06:11:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:58.650 06:11:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.650 06:11:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:58.650 ************************************ 00:05:58.650 START TEST non_locking_app_on_locked_coremask 00:05:58.650 ************************************ 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=784175 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 784175 /var/tmp/spdk.sock 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784175 ']' 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.650 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.651 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:58.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.651 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.651 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:58.651 [2024-12-13 06:11:50.231736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:58.651 [2024-12-13 06:11:50.231777] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784175 ] 00:05:58.910 [2024-12-13 06:11:50.306668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:58.910 [2024-12-13 06:11:50.329318] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=784197 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 784197 /var/tmp/spdk2.sock 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784197 ']' 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:58.910 06:11:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:59.168 [2024-12-13 06:11:50.577174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:05:59.168 [2024-12-13 06:11:50.577221] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784197 ] 00:05:59.168 [2024-12-13 06:11:50.664132] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:59.168 [2024-12-13 06:11:50.664160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.169 [2024-12-13 06:11:50.711720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.104 06:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:00.104 06:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:00.104 06:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 784175 00:06:00.105 06:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784175 00:06:00.105 06:11:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:00.672 lslocks: write error 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 784175 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784175 ']' 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784175 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784175 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 784175' 00:06:00.672 killing process with pid 784175 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784175 00:06:00.672 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784175 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 784197 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784197 ']' 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 784197 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784197 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784197' 00:06:01.240 killing process with pid 784197 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 784197 00:06:01.240 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 784197 00:06:01.499 00:06:01.499 real 0m2.818s 00:06:01.499 user 0m2.964s 00:06:01.499 sys 0m0.970s 00:06:01.499 06:11:52 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:01.499 06:11:52 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.499 ************************************ 00:06:01.499 END TEST non_locking_app_on_locked_coremask 00:06:01.499 ************************************ 00:06:01.499 06:11:53 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:01.499 06:11:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:01.499 06:11:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:01.499 06:11:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:01.499 ************************************ 00:06:01.499 START TEST locking_app_on_unlocked_coremask 00:06:01.499 ************************************ 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=784667 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 784667 /var/tmp/spdk.sock 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784667 ']' 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:01.499 06:11:53 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:01.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:01.499 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:01.499 [2024-12-13 06:11:53.117898] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:01.499 [2024-12-13 06:11:53.117935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784667 ] 00:06:01.758 [2024-12-13 06:11:53.191879] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:01.758 [2024-12-13 06:11:53.191905] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.758 [2024-12-13 06:11:53.214572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=784802 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 784802 /var/tmp/spdk2.sock 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 784802 ']' 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:02.017 06:11:53 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:02.017 [2024-12-13 06:11:53.467810] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:02.017 [2024-12-13 06:11:53.467858] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784802 ] 00:06:02.017 [2024-12-13 06:11:53.555819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.017 [2024-12-13 06:11:53.603276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.953 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.953 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:02.953 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 784802 00:06:02.953 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 784802 00:06:02.953 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:03.212 lslocks: write error 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 784667 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784667 ']' 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 784667 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784667 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784667' 00:06:03.212 killing process with pid 784667 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 784667 00:06:03.212 06:11:54 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 784667 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 784802 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 784802 ']' 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 784802 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 784802 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 784802' 00:06:03.779 killing process with pid 784802 00:06:03.779 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 784802 00:06:03.779 06:11:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 784802 00:06:04.037 00:06:04.037 real 0m2.611s 00:06:04.037 user 0m2.720s 00:06:04.037 sys 0m0.903s 00:06:04.037 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:04.037 06:11:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.037 ************************************ 00:06:04.037 END TEST locking_app_on_unlocked_coremask 00:06:04.037 ************************************ 00:06:04.296 06:11:55 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:04.296 06:11:55 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:04.296 06:11:55 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:04.296 06:11:55 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:04.296 ************************************ 00:06:04.296 START TEST locking_app_on_locked_coremask 00:06:04.296 ************************************ 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=785147 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 785147 /var/tmp/spdk.sock 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785147 ']' 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
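The `locks_exist` calls traced above pair `lslocks -p <pid>` with a grep for `spdk_cpu_lock` to confirm the target process still holds its per-core lock files. A rough standalone equivalent of that helper (a sketch, assuming util-linux `lslocks` is available; the function name and pattern are taken from the trace):

```shell
# Sketch of the locks_exist helper seen in the trace: succeed only if the
# given pid holds at least one file lock whose path mentions spdk_cpu_lock.
locks_exist() {
    lslocks -p "$1" | grep -q spdk_cpu_lock
}

# Example: the current shell holds no such lock, so this prints "none".
if locks_exist $$; then echo "held"; else echo "none"; fi
```

The `lslocks: write error` lines in the log are likely benign noise from this pipeline: `grep -q` exits as soon as it matches, so `lslocks` gets EPIPE while still writing.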
00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:04.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.296 06:11:55 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.296 [2024-12-13 06:11:55.802126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:04.296 [2024-12-13 06:11:55.802171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785147 ] 00:06:04.296 [2024-12-13 06:11:55.879176] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.296 [2024-12-13 06:11:55.899763] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=785281 00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 785281 /var/tmp/spdk2.sock 00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:06:04.554 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 785281 /var/tmp/spdk2.sock 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785281 /var/tmp/spdk2.sock 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 785281 ']' 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:04.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.555 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:04.555 [2024-12-13 06:11:56.161791] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:04.555 [2024-12-13 06:11:56.161838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785281 ] 00:06:04.813 [2024-12-13 06:11:56.251058] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 785147 has claimed it. 00:06:04.813 [2024-12-13 06:11:56.251099] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:05.379 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785281) - No such process 00:06:05.379 ERROR: process (pid: 785281) is no longer running 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 785147 00:06:05.379 06:11:56 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 785147 00:06:05.379 06:11:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:05.657 lslocks: write error 00:06:05.944 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 785147 00:06:05.944 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 785147 ']' 00:06:05.944 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 785147 00:06:05.944 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785147 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785147' 00:06:05.945 killing process with pid 785147 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 785147 00:06:05.945 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 785147 00:06:06.234 00:06:06.234 real 0m1.902s 00:06:06.234 user 0m2.028s 00:06:06.234 sys 0m0.678s 00:06:06.234 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:06.234 06:11:57 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:06:06.234 ************************************ 00:06:06.234 END TEST locking_app_on_locked_coremask 00:06:06.234 ************************************ 00:06:06.234 06:11:57 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:06.234 06:11:57 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:06.234 06:11:57 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:06.234 06:11:57 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:06.234 ************************************ 00:06:06.234 START TEST locking_overlapped_coremask 00:06:06.234 ************************************ 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=785622 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 785622 /var/tmp/spdk.sock 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 785622 ']' 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:06.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.234 06:11:57 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.234 [2024-12-13 06:11:57.771659] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:06.234 [2024-12-13 06:11:57.771699] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785622 ] 00:06:06.234 [2024-12-13 06:11:57.848097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:06.234 [2024-12-13 06:11:57.873499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.234 [2024-12-13 06:11:57.873540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.234 [2024-12-13 06:11:57.873540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=785639 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 785639 /var/tmp/spdk2.sock 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 785639 /var/tmp/spdk2.sock 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 785639 /var/tmp/spdk2.sock 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 785639 ']' 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:06.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:06.564 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:06.564 [2024-12-13 06:11:58.128238] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:06.564 [2024-12-13 06:11:58.128284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785639 ] 00:06:06.823 [2024-12-13 06:11:58.221675] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 785622 has claimed it. 00:06:06.823 [2024-12-13 06:11:58.221713] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:07.390 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (785639) - No such process 00:06:07.390 ERROR: process (pid: 785639) is no longer running 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 
/var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 785622 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 785622 ']' 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 785622 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785622 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:07.390 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:07.391 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785622' 00:06:07.391 killing process with pid 785622 00:06:07.391 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 785622 00:06:07.391 06:11:58 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 785622 00:06:07.650 00:06:07.650 real 0m1.386s 00:06:07.650 user 0m3.834s 00:06:07.650 sys 0m0.388s 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:07.650 ************************************ 
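The `check_remaining_locks` trace above globs the lock files that actually exist into `locks` and compares them against a brace-expanded `locks_expected` list. The same bash idiom in isolation (a sketch using a temporary directory instead of `/var/tmp`):

```shell
#!/usr/bin/env bash
# Recreate the check_remaining_locks comparison from the trace: glob what
# exists, build the expected set with brace expansion, require exact match.
dir=$(mktemp -d)
touch "$dir"/spdk_cpu_lock_000 "$dir"/spdk_cpu_lock_001 "$dir"/spdk_cpu_lock_002
locks=("$dir"/spdk_cpu_lock_*)                    # actual lock files
locks_expected=("$dir"/spdk_cpu_lock_{000..002})  # expected for cores 0-2
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo "locks match"
rm -r "$dir"
```

The right-hand side is quoted so it compares literally rather than as a pattern, which appears to be why the traced `[[ ... ]]` shows every character of the expected string backslash-escaped in xtrace output.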
00:06:07.650 END TEST locking_overlapped_coremask 00:06:07.650 ************************************ 00:06:07.650 06:11:59 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:07.650 06:11:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:07.650 06:11:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.650 06:11:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:07.650 ************************************ 00:06:07.650 START TEST locking_overlapped_coremask_via_rpc 00:06:07.650 ************************************ 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=785895 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 785895 /var/tmp/spdk.sock 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 785895 ']' 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:07.650 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.650 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:07.650 [2024-12-13 06:11:59.224967] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:07.650 [2024-12-13 06:11:59.225009] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785895 ] 00:06:07.650 [2024-12-13 06:11:59.301654] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:07.650 [2024-12-13 06:11:59.301680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:07.909 [2024-12-13 06:11:59.326738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.909 [2024-12-13 06:11:59.326842] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.909 [2024-12-13 06:11:59.326844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=785900 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 785900 /var/tmp/spdk2.sock 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 785900 ']' 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:07.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:07.909 06:11:59 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.168 [2024-12-13 06:11:59.569905] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:08.168 [2024-12-13 06:11:59.569949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785900 ] 00:06:08.168 [2024-12-13 06:11:59.661300] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:08.168 [2024-12-13 06:11:59.661327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:08.168 [2024-12-13 06:11:59.709712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:08.168 [2024-12-13 06:11:59.709821] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:08.168 [2024-12-13 06:11:59.709822] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.104 06:12:00 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 [2024-12-13 06:12:00.424525] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 785895 has claimed it. 00:06:09.104 request: 00:06:09.104 { 00:06:09.104 "method": "framework_enable_cpumask_locks", 00:06:09.104 "req_id": 1 00:06:09.104 } 00:06:09.104 Got JSON-RPC error response 00:06:09.104 response: 00:06:09.104 { 00:06:09.104 "code": -32603, 00:06:09.104 "message": "Failed to claim CPU core: 2" 00:06:09.104 } 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 785895 /var/tmp/spdk.sock 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 785895 ']' 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 785900 /var/tmp/spdk2.sock 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 785900 ']' 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
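The `Failed to claim CPU core: 2` error above occurs because the first target (pid 785895) already owns the lock file for core 2. That first-holder-wins behaviour can be sketched with `flock(1)` on an ordinary file (illustrative only: the lock path echoes the `/var/tmp/spdk_cpu_lock_NNN` names in the trace, and SPDK's actual implementation may use a different locking primitive):

```shell
# First holder takes an exclusive, non-blocking lock; a second open file
# description on the same path then fails to lock and must back off.
LOCK=$(mktemp /tmp/spdk_cpu_lock_demo.XXXXXX)
exec 9>"$LOCK"
flock -n 9 && echo "first: acquired"
( exec 8>"$LOCK"; flock -n 8 && echo "second: acquired" || echo "second: busy" )
rm -f "$LOCK"
```

The second attempt reports `second: busy`, mirroring the claim failure logged above; only killing the holder (as `killprocess` does later in the trace) releases the core.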
00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:09.104 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:09.363 00:06:09.363 real 0m1.668s 00:06:09.363 user 0m0.812s 00:06:09.363 sys 0m0.136s 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:09.363 06:12:00 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.363 ************************************ 00:06:09.363 END TEST locking_overlapped_coremask_via_rpc 00:06:09.363 ************************************ 00:06:09.363 06:12:00 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:09.363 06:12:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 785895 ]] 00:06:09.363 06:12:00 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 785895 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785895 ']' 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785895 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785895 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785895' 00:06:09.363 killing process with pid 785895 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 785895 00:06:09.363 06:12:00 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 785895 00:06:09.622 06:12:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 785900 ]] 00:06:09.622 06:12:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 785900 00:06:09.622 06:12:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785900 ']' 00:06:09.622 06:12:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785900 00:06:09.622 06:12:01 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:09.622 06:12:01 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:09.622 06:12:01 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 785900 00:06:09.881 06:12:01 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:09.881 06:12:01 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:09.881 06:12:01 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 785900' 00:06:09.881 
killing process with pid 785900 00:06:09.881 06:12:01 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 785900 00:06:09.881 06:12:01 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 785900 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 785895 ]] 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 785895 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785895 ']' 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785895 00:06:10.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (785895) - No such process 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 785895 is not found' 00:06:10.139 Process with pid 785895 is not found 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 785900 ]] 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 785900 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 785900 ']' 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 785900 00:06:10.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (785900) - No such process 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 785900 is not found' 00:06:10.139 Process with pid 785900 is not found 00:06:10.139 06:12:01 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:10.139 00:06:10.139 real 0m14.035s 00:06:10.139 user 0m24.204s 00:06:10.139 sys 0m5.117s 00:06:10.139 06:12:01 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.139 06:12:01 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:10.139 ************************************ 00:06:10.139 END TEST cpu_locks 00:06:10.139 ************************************ 00:06:10.139 00:06:10.139 real 0m38.665s 00:06:10.139 user 1m13.557s 00:06:10.139 sys 0m8.680s 00:06:10.139 06:12:01 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.139 06:12:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.139 ************************************ 00:06:10.139 END TEST event 00:06:10.139 ************************************ 00:06:10.139 06:12:01 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.139 06:12:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.139 06:12:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.139 06:12:01 -- common/autotest_common.sh@10 -- # set +x 00:06:10.139 ************************************ 00:06:10.139 START TEST thread 00:06:10.139 ************************************ 00:06:10.139 06:12:01 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:10.139 * Looking for test storage... 
00:06:10.139 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:10.139 06:12:01 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.139 06:12:01 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.139 06:12:01 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.398 06:12:01 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.398 06:12:01 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.398 06:12:01 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.398 06:12:01 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.398 06:12:01 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.398 06:12:01 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.398 06:12:01 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.398 06:12:01 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.398 06:12:01 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.398 06:12:01 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.398 06:12:01 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.398 06:12:01 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.398 06:12:01 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:10.398 06:12:01 thread -- scripts/common.sh@345 -- # : 1 00:06:10.398 06:12:01 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.398 06:12:01 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.398 06:12:01 thread -- scripts/common.sh@365 -- # decimal 1 00:06:10.398 06:12:01 thread -- scripts/common.sh@353 -- # local d=1 00:06:10.398 06:12:01 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.398 06:12:01 thread -- scripts/common.sh@355 -- # echo 1 00:06:10.398 06:12:01 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.398 06:12:01 thread -- scripts/common.sh@366 -- # decimal 2 00:06:10.398 06:12:01 thread -- scripts/common.sh@353 -- # local d=2 00:06:10.398 06:12:01 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.398 06:12:01 thread -- scripts/common.sh@355 -- # echo 2 00:06:10.398 06:12:01 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.398 06:12:01 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.398 06:12:01 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.398 06:12:01 thread -- scripts/common.sh@368 -- # return 0 00:06:10.398 06:12:01 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.399 --rc genhtml_branch_coverage=1 00:06:10.399 --rc genhtml_function_coverage=1 00:06:10.399 --rc genhtml_legend=1 00:06:10.399 --rc geninfo_all_blocks=1 00:06:10.399 --rc geninfo_unexecuted_blocks=1 00:06:10.399 00:06:10.399 ' 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.399 --rc genhtml_branch_coverage=1 00:06:10.399 --rc genhtml_function_coverage=1 00:06:10.399 --rc genhtml_legend=1 00:06:10.399 --rc geninfo_all_blocks=1 00:06:10.399 --rc geninfo_unexecuted_blocks=1 00:06:10.399 00:06:10.399 ' 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.399 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.399 --rc genhtml_branch_coverage=1 00:06:10.399 --rc genhtml_function_coverage=1 00:06:10.399 --rc genhtml_legend=1 00:06:10.399 --rc geninfo_all_blocks=1 00:06:10.399 --rc geninfo_unexecuted_blocks=1 00:06:10.399 00:06:10.399 ' 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.399 --rc genhtml_branch_coverage=1 00:06:10.399 --rc genhtml_function_coverage=1 00:06:10.399 --rc genhtml_legend=1 00:06:10.399 --rc geninfo_all_blocks=1 00:06:10.399 --rc geninfo_unexecuted_blocks=1 00:06:10.399 00:06:10.399 ' 00:06:10.399 06:12:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.399 06:12:01 thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.399 ************************************ 00:06:10.399 START TEST thread_poller_perf 00:06:10.399 ************************************ 00:06:10.399 06:12:01 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:10.399 [2024-12-13 06:12:01.914673] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:10.399 [2024-12-13 06:12:01.914751] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786564 ] 00:06:10.399 [2024-12-13 06:12:01.991132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.399 [2024-12-13 06:12:02.013668] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.399 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:11.773 [2024-12-13T05:12:03.427Z] ====================================== 00:06:11.773 [2024-12-13T05:12:03.427Z] busy:2104331508 (cyc) 00:06:11.773 [2024-12-13T05:12:03.427Z] total_run_count: 416000 00:06:11.773 [2024-12-13T05:12:03.427Z] tsc_hz: 2100000000 (cyc) 00:06:11.773 [2024-12-13T05:12:03.427Z] ====================================== 00:06:11.773 [2024-12-13T05:12:03.427Z] poller_cost: 5058 (cyc), 2408 (nsec) 00:06:11.773 00:06:11.773 real 0m1.156s 00:06:11.773 user 0m1.079s 00:06:11.773 sys 0m0.073s 00:06:11.773 06:12:03 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.773 06:12:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:11.773 ************************************ 00:06:11.773 END TEST thread_poller_perf 00:06:11.773 ************************************ 00:06:11.773 06:12:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.773 06:12:03 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:11.773 06:12:03 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.773 06:12:03 thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.773 ************************************ 00:06:11.773 START TEST thread_poller_perf 00:06:11.773 
************************************ 00:06:11.773 06:12:03 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:11.773 [2024-12-13 06:12:03.144246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:11.773 [2024-12-13 06:12:03.144315] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786819 ] 00:06:11.773 [2024-12-13 06:12:03.222795] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.773 [2024-12-13 06:12:03.244862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.773 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:12.709 [2024-12-13T05:12:04.363Z] ====================================== 00:06:12.709 [2024-12-13T05:12:04.363Z] busy:2101348290 (cyc) 00:06:12.709 [2024-12-13T05:12:04.363Z] total_run_count: 5118000 00:06:12.709 [2024-12-13T05:12:04.363Z] tsc_hz: 2100000000 (cyc) 00:06:12.709 [2024-12-13T05:12:04.363Z] ====================================== 00:06:12.709 [2024-12-13T05:12:04.363Z] poller_cost: 410 (cyc), 195 (nsec) 00:06:12.709 00:06:12.709 real 0m1.156s 00:06:12.709 user 0m1.080s 00:06:12.709 sys 0m0.071s 00:06:12.709 06:12:04 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.709 06:12:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:12.709 ************************************ 00:06:12.709 END TEST thread_poller_perf 00:06:12.709 ************************************ 00:06:12.709 06:12:04 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:12.709 00:06:12.709 real 0m2.623s 00:06:12.709 user 0m2.317s 00:06:12.709 sys 0m0.318s 00:06:12.709 06:12:04 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.709 06:12:04 thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.709 ************************************ 00:06:12.709 END TEST thread 00:06:12.709 ************************************ 00:06:12.709 06:12:04 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:12.709 06:12:04 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.709 06:12:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.709 06:12:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.709 06:12:04 -- common/autotest_common.sh@10 -- # set +x 00:06:12.968 ************************************ 00:06:12.968 START TEST app_cmdline 00:06:12.968 ************************************ 00:06:12.968 06:12:04 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:12.968 * Looking for test storage... 00:06:12.968 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:12.968 06:12:04 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:12.968 06:12:04 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:12.968 06:12:04 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:12.968 06:12:04 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:12.968 06:12:04 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:12.969 06:12:04 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:12.969 06:12:04 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.969 --rc genhtml_branch_coverage=1 
00:06:12.969 --rc genhtml_function_coverage=1 00:06:12.969 --rc genhtml_legend=1 00:06:12.969 --rc geninfo_all_blocks=1 00:06:12.969 --rc geninfo_unexecuted_blocks=1 00:06:12.969 00:06:12.969 ' 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.969 --rc genhtml_branch_coverage=1 00:06:12.969 --rc genhtml_function_coverage=1 00:06:12.969 --rc genhtml_legend=1 00:06:12.969 --rc geninfo_all_blocks=1 00:06:12.969 --rc geninfo_unexecuted_blocks=1 00:06:12.969 00:06:12.969 ' 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.969 --rc genhtml_branch_coverage=1 00:06:12.969 --rc genhtml_function_coverage=1 00:06:12.969 --rc genhtml_legend=1 00:06:12.969 --rc geninfo_all_blocks=1 00:06:12.969 --rc geninfo_unexecuted_blocks=1 00:06:12.969 00:06:12.969 ' 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:12.969 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:12.969 --rc genhtml_branch_coverage=1 00:06:12.969 --rc genhtml_function_coverage=1 00:06:12.969 --rc genhtml_legend=1 00:06:12.969 --rc geninfo_all_blocks=1 00:06:12.969 --rc geninfo_unexecuted_blocks=1 00:06:12.969 00:06:12.969 ' 00:06:12.969 06:12:04 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:12.969 06:12:04 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=787110 00:06:12.969 06:12:04 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 787110 00:06:12.969 06:12:04 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 787110 ']' 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.969 06:12:04 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:12.969 [2024-12-13 06:12:04.618280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:12.969 [2024-12-13 06:12:04.618328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid787110 ] 00:06:13.227 [2024-12-13 06:12:04.690491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.227 [2024-12-13 06:12:04.712328] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.486 06:12:04 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.486 06:12:04 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:13.486 06:12:04 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:13.486 { 00:06:13.486 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:13.486 "fields": { 00:06:13.486 "major": 25, 00:06:13.486 "minor": 1, 00:06:13.486 "patch": 0, 00:06:13.486 "suffix": "-pre", 00:06:13.486 "commit": "e01cb43b8" 00:06:13.486 } 00:06:13.486 } 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:13.486 06:12:05 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:13.486 06:12:05 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:13.486 06:12:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:13.486 06:12:05 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:13.745 06:12:05 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:13.745 06:12:05 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:13.745 06:12:05 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:13.745 request: 00:06:13.745 { 00:06:13.745 "method": "env_dpdk_get_mem_stats", 00:06:13.745 "req_id": 1 00:06:13.745 } 00:06:13.745 Got JSON-RPC error response 00:06:13.745 response: 00:06:13.745 { 00:06:13.745 "code": -32601, 00:06:13.745 "message": "Method not found" 00:06:13.745 } 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:13.745 06:12:05 app_cmdline -- app/cmdline.sh@1 -- # killprocess 787110 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 787110 ']' 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 787110 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:13.745 06:12:05 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 787110 00:06:14.003 06:12:05 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.003 06:12:05 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.003 06:12:05 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 787110' 00:06:14.003 killing process with pid 787110 00:06:14.003 06:12:05 
app_cmdline -- common/autotest_common.sh@973 -- # kill 787110 00:06:14.003 06:12:05 app_cmdline -- common/autotest_common.sh@978 -- # wait 787110 00:06:14.262 00:06:14.262 real 0m1.311s 00:06:14.262 user 0m1.569s 00:06:14.262 sys 0m0.435s 00:06:14.262 06:12:05 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.262 06:12:05 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:14.262 ************************************ 00:06:14.262 END TEST app_cmdline 00:06:14.262 ************************************ 00:06:14.262 06:12:05 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.262 06:12:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.262 06:12:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.262 06:12:05 -- common/autotest_common.sh@10 -- # set +x 00:06:14.262 ************************************ 00:06:14.262 START TEST version 00:06:14.262 ************************************ 00:06:14.262 06:12:05 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:14.262 * Looking for test storage... 
00:06:14.262 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:14.262 06:12:05 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.262 06:12:05 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.262 06:12:05 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.520 06:12:05 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.520 06:12:05 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.520 06:12:05 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.520 06:12:05 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.520 06:12:05 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.520 06:12:05 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.520 06:12:05 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.520 06:12:05 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.520 06:12:05 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.520 06:12:05 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.520 06:12:05 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.520 06:12:05 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.520 06:12:05 version -- scripts/common.sh@344 -- # case "$op" in 00:06:14.520 06:12:05 version -- scripts/common.sh@345 -- # : 1 00:06:14.520 06:12:05 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.520 06:12:05 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.521 06:12:05 version -- scripts/common.sh@365 -- # decimal 1 00:06:14.521 06:12:05 version -- scripts/common.sh@353 -- # local d=1 00:06:14.521 06:12:05 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.521 06:12:05 version -- scripts/common.sh@355 -- # echo 1 00:06:14.521 06:12:05 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.521 06:12:05 version -- scripts/common.sh@366 -- # decimal 2 00:06:14.521 06:12:05 version -- scripts/common.sh@353 -- # local d=2 00:06:14.521 06:12:05 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.521 06:12:05 version -- scripts/common.sh@355 -- # echo 2 00:06:14.521 06:12:05 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.521 06:12:05 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.521 06:12:05 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.521 06:12:05 version -- scripts/common.sh@368 -- # return 0 00:06:14.521 06:12:05 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.521 06:12:05 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.521 --rc genhtml_branch_coverage=1 00:06:14.521 --rc genhtml_function_coverage=1 00:06:14.521 --rc genhtml_legend=1 00:06:14.521 --rc geninfo_all_blocks=1 00:06:14.521 --rc geninfo_unexecuted_blocks=1 00:06:14.521 00:06:14.521 ' 00:06:14.521 06:12:05 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.521 --rc genhtml_branch_coverage=1 00:06:14.521 --rc genhtml_function_coverage=1 00:06:14.521 --rc genhtml_legend=1 00:06:14.521 --rc geninfo_all_blocks=1 00:06:14.521 --rc geninfo_unexecuted_blocks=1 00:06:14.521 00:06:14.521 ' 00:06:14.521 06:12:05 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:14.521 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.521 --rc genhtml_branch_coverage=1 00:06:14.521 --rc genhtml_function_coverage=1 00:06:14.521 --rc genhtml_legend=1 00:06:14.521 --rc geninfo_all_blocks=1 00:06:14.521 --rc geninfo_unexecuted_blocks=1 00:06:14.521 00:06:14.521 ' 00:06:14.521 06:12:05 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.521 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.521 --rc genhtml_branch_coverage=1 00:06:14.521 --rc genhtml_function_coverage=1 00:06:14.521 --rc genhtml_legend=1 00:06:14.521 --rc geninfo_all_blocks=1 00:06:14.521 --rc geninfo_unexecuted_blocks=1 00:06:14.521 00:06:14.521 ' 00:06:14.521 06:12:05 version -- app/version.sh@17 -- # get_header_version major 00:06:14.521 06:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.521 06:12:05 version -- app/version.sh@17 -- # major=25 00:06:14.521 06:12:05 version -- app/version.sh@18 -- # get_header_version minor 00:06:14.521 06:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.521 06:12:05 version -- app/version.sh@18 -- # minor=1 00:06:14.521 06:12:05 version -- app/version.sh@19 -- # get_header_version patch 00:06:14.521 06:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.521 
06:12:05 version -- app/version.sh@19 -- # patch=0 00:06:14.521 06:12:05 version -- app/version.sh@20 -- # get_header_version suffix 00:06:14.521 06:12:05 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # cut -f2 00:06:14.521 06:12:05 version -- app/version.sh@14 -- # tr -d '"' 00:06:14.521 06:12:05 version -- app/version.sh@20 -- # suffix=-pre 00:06:14.521 06:12:05 version -- app/version.sh@22 -- # version=25.1 00:06:14.521 06:12:05 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:14.521 06:12:05 version -- app/version.sh@28 -- # version=25.1rc0 00:06:14.521 06:12:05 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:14.521 06:12:05 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:14.521 06:12:06 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:14.521 06:12:06 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:14.521 00:06:14.521 real 0m0.248s 00:06:14.521 user 0m0.157s 00:06:14.521 sys 0m0.135s 00:06:14.521 06:12:06 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.521 06:12:06 version -- common/autotest_common.sh@10 -- # set +x 00:06:14.521 ************************************ 00:06:14.521 END TEST version 00:06:14.521 ************************************ 00:06:14.521 06:12:06 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:14.521 06:12:06 -- spdk/autotest.sh@194 -- # uname -s 00:06:14.521 06:12:06 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:14.521 06:12:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.521 06:12:06 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:14.521 06:12:06 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:14.521 06:12:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:14.521 06:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.521 06:12:06 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:14.521 06:12:06 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:14.521 06:12:06 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.521 06:12:06 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.521 06:12:06 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.521 06:12:06 -- common/autotest_common.sh@10 -- # set +x 00:06:14.521 ************************************ 00:06:14.521 START TEST nvmf_tcp 00:06:14.521 ************************************ 00:06:14.521 06:12:06 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:14.780 * Looking for test storage... 
00:06:14.780 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:14.780 06:12:06 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.780 --rc genhtml_branch_coverage=1 00:06:14.780 --rc genhtml_function_coverage=1 00:06:14.780 --rc genhtml_legend=1 00:06:14.780 --rc geninfo_all_blocks=1 00:06:14.780 --rc geninfo_unexecuted_blocks=1 00:06:14.780 00:06:14.780 ' 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.780 --rc genhtml_branch_coverage=1 00:06:14.780 --rc genhtml_function_coverage=1 00:06:14.780 --rc genhtml_legend=1 00:06:14.780 --rc geninfo_all_blocks=1 00:06:14.780 --rc geninfo_unexecuted_blocks=1 00:06:14.780 00:06:14.780 ' 00:06:14.780 06:12:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:14.780 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.780 --rc genhtml_branch_coverage=1 00:06:14.781 --rc genhtml_function_coverage=1 00:06:14.781 --rc genhtml_legend=1 00:06:14.781 --rc geninfo_all_blocks=1 00:06:14.781 --rc geninfo_unexecuted_blocks=1 00:06:14.781 00:06:14.781 ' 00:06:14.781 06:12:06 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:14.781 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:14.781 --rc genhtml_branch_coverage=1 00:06:14.781 --rc genhtml_function_coverage=1 00:06:14.781 --rc genhtml_legend=1 00:06:14.781 --rc geninfo_all_blocks=1 00:06:14.781 --rc geninfo_unexecuted_blocks=1 00:06:14.781 00:06:14.781 ' 00:06:14.781 06:12:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:14.781 06:12:06 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:14.781 06:12:06 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.781 06:12:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:14.781 06:12:06 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.781 06:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:14.781 ************************************ 00:06:14.781 START TEST nvmf_target_core 00:06:14.781 ************************************ 00:06:14.781 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:14.781 * Looking for test storage... 
00:06:15.040 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.040 --rc genhtml_branch_coverage=1 00:06:15.040 --rc genhtml_function_coverage=1 00:06:15.040 --rc genhtml_legend=1 00:06:15.040 --rc geninfo_all_blocks=1 00:06:15.040 --rc geninfo_unexecuted_blocks=1 00:06:15.040 00:06:15.040 ' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.040 --rc genhtml_branch_coverage=1 
00:06:15.040 --rc genhtml_function_coverage=1 00:06:15.040 --rc genhtml_legend=1 00:06:15.040 --rc geninfo_all_blocks=1 00:06:15.040 --rc geninfo_unexecuted_blocks=1 00:06:15.040 00:06:15.040 ' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.040 --rc genhtml_branch_coverage=1 00:06:15.040 --rc genhtml_function_coverage=1 00:06:15.040 --rc genhtml_legend=1 00:06:15.040 --rc geninfo_all_blocks=1 00:06:15.040 --rc geninfo_unexecuted_blocks=1 00:06:15.040 00:06:15.040 ' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.040 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.040 --rc genhtml_branch_coverage=1 00:06:15.040 --rc genhtml_function_coverage=1 00:06:15.040 --rc genhtml_legend=1 00:06:15.040 --rc geninfo_all_blocks=1 00:06:15.040 --rc geninfo_unexecuted_blocks=1 00:06:15.040 00:06:15.040 ' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.040 06:12:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.041 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:15.041 ************************************ 00:06:15.041 START TEST nvmf_abort 00:06:15.041 ************************************ 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:15.041 * Looking for test storage... 
00:06:15.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:15.041 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:15.302 
06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.302 --rc genhtml_branch_coverage=1 00:06:15.302 --rc genhtml_function_coverage=1 00:06:15.302 --rc genhtml_legend=1 00:06:15.302 --rc geninfo_all_blocks=1 00:06:15.302 --rc 
geninfo_unexecuted_blocks=1 00:06:15.302 00:06:15.302 ' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.302 --rc genhtml_branch_coverage=1 00:06:15.302 --rc genhtml_function_coverage=1 00:06:15.302 --rc genhtml_legend=1 00:06:15.302 --rc geninfo_all_blocks=1 00:06:15.302 --rc geninfo_unexecuted_blocks=1 00:06:15.302 00:06:15.302 ' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.302 --rc genhtml_branch_coverage=1 00:06:15.302 --rc genhtml_function_coverage=1 00:06:15.302 --rc genhtml_legend=1 00:06:15.302 --rc geninfo_all_blocks=1 00:06:15.302 --rc geninfo_unexecuted_blocks=1 00:06:15.302 00:06:15.302 ' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:15.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:15.302 --rc genhtml_branch_coverage=1 00:06:15.302 --rc genhtml_function_coverage=1 00:06:15.302 --rc genhtml_legend=1 00:06:15.302 --rc geninfo_all_blocks=1 00:06:15.302 --rc geninfo_unexecuted_blocks=1 00:06:15.302 00:06:15.302 ' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:15.302 06:12:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:15.302 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:15.303 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:15.303 06:12:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:21.871 06:12:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:21.871 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:21.871 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:21.871 06:12:12 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.871 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:21.871 Found net devices under 0000:af:00.0: cvl_0_0 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:21.872 Found net devices under 0000:af:00.1: cvl_0_1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:21.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:21.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:06:21.872 00:06:21.872 --- 10.0.0.2 ping statistics --- 00:06:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.872 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:21.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:21.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:06:21.872 00:06:21.872 --- 10.0.0.1 ping statistics --- 00:06:21.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:21.872 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=791117 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 791117 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 791117 ']' 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.872 06:12:12 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 [2024-12-13 06:12:12.797626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:06:21.872 [2024-12-13 06:12:12.797671] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:21.872 [2024-12-13 06:12:12.878420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.872 [2024-12-13 06:12:12.901802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:21.872 [2024-12-13 06:12:12.901840] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:21.872 [2024-12-13 06:12:12.901846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:21.872 [2024-12-13 06:12:12.901852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:21.872 [2024-12-13 06:12:12.901856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:21.872 [2024-12-13 06:12:12.903041] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.872 [2024-12-13 06:12:12.903149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.872 [2024-12-13 06:12:12.903151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 [2024-12-13 06:12:13.041719] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 Malloc0 00:06:21.872 06:12:13 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 Delay0 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.872 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.873 [2024-12-13 06:12:13.123397] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:21.873 06:12:13 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:21.873 [2024-12-13 06:12:13.215454] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:23.775 Initializing NVMe Controllers 00:06:23.775 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:23.775 controller IO queue size 128 less than required 00:06:23.775 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:23.775 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:23.775 Initialization complete. Launching workers. 
00:06:23.775 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37544 00:06:23.775 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37605, failed to submit 62 00:06:23.775 success 37548, unsuccessful 57, failed 0 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:23.775 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:23.775 rmmod nvme_tcp 00:06:23.775 rmmod nvme_fabrics 00:06:23.775 rmmod nvme_keyring 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:24.034 06:12:15 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 791117 ']' 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 791117 ']' 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 791117' 00:06:24.034 killing process with pid 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 791117 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:06:24.034 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:06:24.292 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:24.292 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:24.293 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:24.293 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:24.293 06:12:15 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:26.198 00:06:26.198 real 0m11.166s 00:06:26.198 user 0m11.654s 00:06:26.198 sys 0m5.455s 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:26.198 ************************************ 00:06:26.198 END TEST nvmf_abort 00:06:26.198 ************************************ 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.198 06:12:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.198 ************************************ 00:06:26.198 START TEST nvmf_ns_hotplug_stress 00:06:26.198 ************************************ 00:06:26.199 06:12:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:26.458 * Looking for test storage... 00:06:26.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.458 
06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.458 06:12:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:26.458 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.459 06:12:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.459 --rc genhtml_branch_coverage=1 00:06:26.459 --rc genhtml_function_coverage=1 00:06:26.459 --rc genhtml_legend=1 00:06:26.459 --rc geninfo_all_blocks=1 00:06:26.459 --rc geninfo_unexecuted_blocks=1 00:06:26.459 00:06:26.459 ' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.459 --rc genhtml_branch_coverage=1 00:06:26.459 --rc genhtml_function_coverage=1 00:06:26.459 --rc genhtml_legend=1 00:06:26.459 --rc geninfo_all_blocks=1 00:06:26.459 --rc geninfo_unexecuted_blocks=1 00:06:26.459 00:06:26.459 ' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.459 --rc genhtml_branch_coverage=1 00:06:26.459 --rc genhtml_function_coverage=1 00:06:26.459 --rc genhtml_legend=1 00:06:26.459 --rc geninfo_all_blocks=1 00:06:26.459 --rc geninfo_unexecuted_blocks=1 00:06:26.459 00:06:26.459 ' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.459 --rc genhtml_branch_coverage=1 00:06:26.459 --rc genhtml_function_coverage=1 00:06:26.459 --rc genhtml_legend=1 00:06:26.459 --rc geninfo_all_blocks=1 00:06:26.459 --rc geninfo_unexecuted_blocks=1 00:06:26.459 
00:06:26.459 ' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:26.459 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:26.459 06:12:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:33.034 06:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:33.034 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:33.035 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:33.035 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:33.035 06:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:33.035 Found net devices under 0000:af:00.0: cvl_0_0 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:33.035 06:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:33.035 Found net devices under 0000:af:00.1: cvl_0_1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:33.035 06:12:23 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:33.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:33.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:06:33.035 00:06:33.035 --- 10.0.0.2 ping statistics --- 00:06:33.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.035 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:33.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:33.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:06:33.035 00:06:33.035 --- 10.0.0.1 ping statistics --- 00:06:33.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:33.035 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:33.035 06:12:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=795081 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 795081 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 795081 ']' 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.035 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 [2024-12-13 06:12:24.077785] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:06:33.036 [2024-12-13 06:12:24.077832] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:33.036 [2024-12-13 06:12:24.156503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.036 [2024-12-13 06:12:24.178492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:33.036 [2024-12-13 06:12:24.178527] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:33.036 [2024-12-13 06:12:24.178534] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:33.036 [2024-12-13 06:12:24.178540] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:33.036 [2024-12-13 06:12:24.178544] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:33.036 [2024-12-13 06:12:24.183466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.036 [2024-12-13 06:12:24.183499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.036 [2024-12-13 06:12:24.183500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:33.036 [2024-12-13 06:12:24.487413] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.036 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:33.294 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:33.294 [2024-12-13 06:12:24.892890] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:33.294 06:12:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:33.560 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:33.820 Malloc0 00:06:33.820 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:34.078 Delay0 00:06:34.078 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:34.078 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:34.337 NULL1 00:06:34.337 06:12:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:34.595 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:34.595 06:12:26 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=795357 00:06:34.595 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:34.595 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.854 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.112 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:35.113 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:35.113 true 00:06:35.113 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:35.113 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:35.371 06:12:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:35.630 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:35.630 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:35.889 true 00:06:35.889 06:12:27 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:35.889 06:12:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:36.824 Read completed with error (sct=0, sc=11) 00:06:36.824 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:36.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.083 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:37.083 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:37.083 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:37.342 true 00:06:37.342 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:37.342 06:12:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.286 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:38.286 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1004 00:06:38.286 06:12:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:38.546 true 00:06:38.546 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:38.546 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:38.804 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:39.063 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:39.063 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:39.063 true 00:06:39.063 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:39.063 06:12:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.439 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.439 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:40.439 06:12:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:40.439 true 00:06:40.439 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:40.439 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:40.697 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:40.956 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:40.956 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:41.215 true 00:06:41.215 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:41.215 06:12:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:42.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.151 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:42.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.151 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.410 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:06:42.410 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:42.410 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:42.410 06:12:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:42.668 true 00:06:42.668 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:42.668 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.605 06:12:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:43.605 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:43.605 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:43.863 true 00:06:43.863 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:43.863 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:43.863 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
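The repeating records above are the body of the `ns_hotplug_stress.sh` loop: remove namespace 1 from the subsystem (line 45), re-add `Delay0` (line 46), bump `null_size` (line 49), and resize `NULL1` to the new size (line 50) — 1001, 1002, 1003, … — while perf I/O is in flight. A minimal sketch of one iteration's RPC sequence as it appears in the log (the short `scripts/rpc.py` path and the helper name are illustrative, not taken from the script):

```python
# One iteration of the hotplug-stress cycle visible in the records above:
# detach namespace 1, re-attach the Delay0 bdev, then grow the NULL1 bdev
# by one block. Commands are built as strings rather than executed here.
RPC = "scripts/rpc.py"
NQN = "nqn.2016-06.io.spdk:cnode1"

def hotplug_iteration(null_size: int) -> list[str]:
    return [
        f"{RPC} nvmf_subsystem_remove_ns {NQN} 1",
        f"{RPC} nvmf_subsystem_add_ns {NQN} Delay0",
        f"{RPC} bdev_null_resize NULL1 {null_size}",
    ]

for cmd in hotplug_iteration(1001):
    print(cmd)
```

Running this against a live target would require the real rpc.py path from the workspace; here the output only mirrors the order of operations recorded in the log.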
00:06:44.122 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:44.122 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:44.380 true 00:06:44.380 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:44.380 06:12:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:45.317 06:12:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:45.576 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:45.576 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:45.835 true 00:06:45.835 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:45.835 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:46.093 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:46.093 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:46.093 
06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:46.352 true 00:06:46.352 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:46.352 06:12:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:47.288 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:47.546 06:12:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.546 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:47.546 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:47.805 true 00:06:47.805 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:47.805 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.064 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.323 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:48.323 06:12:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:48.323 true 00:06:48.323 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:48.323 06:12:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:49.700 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:49.700 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:49.958 true 00:06:49.958 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:49.958 06:12:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:50.894 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.894 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:50.894 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:51.154 true 00:06:51.154 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:51.154 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.413 06:12:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.671 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:51.672 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:51.672 true 00:06:51.672 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:51.672 06:12:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.048 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.306 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:53.306 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:53.306 true 00:06:53.306 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:53.306 06:12:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.241 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:54.242 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:54.500 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 
00:06:54.500 06:12:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:54.500 true 00:06:54.500 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:54.500 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:54.759 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.017 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:55.017 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:55.276 true 00:06:55.276 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:55.276 06:12:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.211 06:12:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.211 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
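Between resize steps the script checks that the perf process (PID 795357 in this run) is still running with `kill -0 $PERF_PID` (line 44). Signal 0 delivers nothing; the call only succeeds or fails depending on whether the PID exists, which is why the log later shows "kill: (795357) - No such process" once perf exits. The same probe in Python (the PID used below is our own, purely for illustration):

```python
import os

def pid_alive(pid: int) -> bool:
    """Return True if a process with this PID exists; signal 0 sends nothing."""
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False          # no such process -> the perf run has ended
    except PermissionError:
        return True           # process exists but belongs to another user
    return True

print(pid_alive(os.getpid()))  # True: our own process certainly exists
```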
00:06:56.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.470 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:56.470 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:56.470 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:56.728 true 00:06:56.728 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:56.728 06:12:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.664 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.664 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:57.664 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:57.664 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:57.925 true 00:06:57.925 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:57.925 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.184 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.442 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:58.442 06:12:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:58.442 true 00:06:58.700 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:06:58.700 06:12:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.637 06:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.637 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.895 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.895 06:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:59.895 06:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:00.154 true 00:07:00.154 06:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:07:00.154 06:12:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.090 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:01.090 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.090 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:01.090 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:01.349 true 00:07:01.349 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357 00:07:01.349 06:12:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.607 06:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.607 06:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:01.607 06:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:07:01.866 true
00:07:01.866 06:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357
00:07:01.866 06:12:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:03.242 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:07:03.242 06:12:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:07:03.506 true
00:07:03.506 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357
00:07:03.506 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.446 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:04.446 06:12:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:04.446 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:07:04.446 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:07:04.704 true
00:07:04.704 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357
00:07:04.704 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:04.963 Initializing NVMe Controllers
00:07:04.963 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:04.963 Controller IO queue size 128, less than required.
00:07:04.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.963 Controller IO queue size 128, less than required.
00:07:04.963 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:04.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:07:04.963 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:07:04.963 Initialization complete. Launching workers.
00:07:04.963 ========================================================
00:07:04.963 Latency(us)
00:07:04.963 Device Information : IOPS MiB/s Average min max
00:07:04.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1673.95 0.82 46768.37 2868.30 1131759.81
00:07:04.963 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16009.85 7.82 7975.96 1560.59 446866.45
00:07:04.963 ========================================================
00:07:04.963 Total : 17683.80 8.63 11648.05 1560.59 1131759.81
00:07:04.963
00:07:04.963 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:05.222 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:07:05.222 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:07:05.222 true
00:07:05.222 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 795357
00:07:05.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (795357) - No such process
00:07:05.222 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 795357
00:07:05.222 06:12:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:05.481 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:05.739
06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:05.739 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:05.739 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:07:05.739 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:05.739 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:05.998 null0
00:07:05.998 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:05.998 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:05.998 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:07:05.998 null1
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:07:06.256 null2
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.256 06:12:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:07:06.515 null3
00:07:06.515 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.515 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.515 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:07:06.773 null4
00:07:06.773 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:06.773 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:06.773 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:07:06.773 null5
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:07:07.032 null6
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.032 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:07:07.291 null7
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:07:07.291 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 800927 800930 800933 800936 800940 800943 800946 800949
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.292 06:12:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:07.551 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:07.811 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:08.070 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.071 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.071 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:08.071 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.071 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.071 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:08.330 06:12:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.588 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:07:08.847 06:13:00
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:08.847 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.106 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.107 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.365 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.365 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.365 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.365 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.365 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 
06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.366 06:13:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.625 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:09.884 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:09.885 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:09.885 06:13:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:09.885 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:09.885 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.143 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.144 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.402 06:13:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.661 06:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:10.661 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:10.921 06:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:10.921 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:11.180 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:11.438 06:13:02 
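The `(( ++i ))` / `(( i < 10 ))` / `nvmf_subsystem_add_ns` trace above is the hot-plug loop in target/ns_hotplug_stress.sh: ten namespaces backed by null bdevs are attached to cnode1 and then detached again. Below is a dry-run sketch of that loop; `RPC` is a stand-in so it runs without a live SPDK target (the real script invokes scripts/rpc.py directly and, as the shuffled NSIDs in the trace show, randomizes the add/remove order — this sketch uses sequential order for determinism):

```shell
# Dry-run sketch of the ns_hotplug_stress add/remove round.
# RPC defaults to a placeholder so no SPDK target is needed.
RPC=${RPC:-echo rpc.py}
NQN=nqn.2016-06.io.spdk:cnode1

hotplug_round() {
    local i
    for (( i = 0; i < 10; i++ )); do
        # NSID is i+1, backed by bdev null$i, matching the trace
        $RPC nvmf_subsystem_add_ns -n $((i + 1)) "$NQN" "null$i"
    done
    for (( i = 10; i > 0; i-- )); do
        $RPC nvmf_subsystem_remove_ns "$NQN" "$i"
    done
}
```

In the real run this round repeats under load so that I/O races against namespace attach/detach, which is the point of the stress test.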
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:11.438 06:13:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:11.438 rmmod nvme_tcp 00:07:11.438 rmmod nvme_fabrics 00:07:11.438 rmmod nvme_keyring 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 795081 ']' 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 795081 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 795081 ']' 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 795081 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 795081 00:07:11.438 06:13:03 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:11.438 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:11.439 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 795081' 00:07:11.439 killing process with pid 795081 00:07:11.439 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 795081 00:07:11.439 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 795081 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null'
00:07:11.698 06:13:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:14.248
00:07:14.248 real 0m47.491s
00:07:14.248 user 3m13.805s
00:07:14.248 sys 0m15.280s
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:14.248 ************************************
00:07:14.248 END TEST nvmf_ns_hotplug_stress
00:07:14.248 ************************************
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:14.248 ************************************
00:07:14.248 START TEST nvmf_delete_subsystem
00:07:14.248 ************************************
00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:07:14.248 * Looking for test storage...
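The teardown traced above ends with autotest_common.sh's killprocess stopping the nvmf target app (pid 795081). A hedged reconstruction of that helper's Linux branch, limited to the checks visible in the trace (`kill -0` liveness probe, `ps -o comm=` name lookup, the sudo guard, then `kill` and `wait`):

```shell
# Hypothetical reconstruction of killprocess from the trace; not the
# verbatim SPDK helper. It refuses empty PIDs, probes that the process
# is alive, refuses to SIGTERM a sudo wrapper, then kills and reaps it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1                # the '[' -z "$pid" ']' guard
    kill -0 "$pid" 2>/dev/null || return 1   # kill -0 only tests existence
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" != sudo ] || return 1  # never kill the sudo parent
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap so no zombie remains
}
```

The `uname` check in the trace selects this `ps` lookup on Linux; the FreeBSD branch differs and is not sketched here.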
00:07:14.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:14.248 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:14.249 06:13:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
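The prologue above steps through scripts/common.sh's lcov version gate: `lt 1.15 2` calls `cmp_versions`, which splits both versions on `.`, `-` and `:` (the `IFS=.-:` / `read -ra` lines) and compares field by field. A condensed re-implementation of that comparison, for illustration only (the real helper supports more operators):

```shell
# Field-by-field version comparison in the style of scripts/common.sh.
# Missing fields default to 0, so 1.15 compares as 1.15.0 against 2.0.0.
lt() { cmp_versions "$1" '<' "$2"; }

cmp_versions() {
    local IFS=.-:                 # split on '.', '-' and ':' as in the trace
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v d1 d2
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        d1=${ver1[v]:-0} d2=${ver2[v]:-0}
        if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
        if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
```

Numeric comparison per field is what makes `1.2.3 < 1.10` hold, where a plain string sort would get it wrong.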
00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:14.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.249 --rc genhtml_branch_coverage=1 00:07:14.249 --rc genhtml_function_coverage=1 00:07:14.249 --rc genhtml_legend=1 00:07:14.249 --rc geninfo_all_blocks=1 00:07:14.249 --rc geninfo_unexecuted_blocks=1 00:07:14.249 00:07:14.249 ' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:14.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.249 --rc genhtml_branch_coverage=1 00:07:14.249 --rc genhtml_function_coverage=1 00:07:14.249 --rc genhtml_legend=1 00:07:14.249 --rc geninfo_all_blocks=1 00:07:14.249 --rc geninfo_unexecuted_blocks=1 00:07:14.249 00:07:14.249 ' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:14.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.249 --rc genhtml_branch_coverage=1 00:07:14.249 --rc genhtml_function_coverage=1 00:07:14.249 --rc genhtml_legend=1 00:07:14.249 --rc geninfo_all_blocks=1 00:07:14.249 --rc geninfo_unexecuted_blocks=1 00:07:14.249 00:07:14.249 ' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:14.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:14.249 --rc genhtml_branch_coverage=1 00:07:14.249 --rc genhtml_function_coverage=1 00:07:14.249 --rc genhtml_legend=1 00:07:14.249 --rc geninfo_all_blocks=1 00:07:14.249 --rc geninfo_unexecuted_blocks=1 00:07:14.249 00:07:14.249 ' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:14.249 06:13:05 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:14.249 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
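The trace above captures a real shell error: nvmf/common.sh line 33 evaluates `'[' '' -eq 1 ']'` and `[` complains "[: : integer expression expected", because `-eq` requires integers on both sides and the variable expanded empty (its name isn't visible in the trace; `maybe_flag` below is a stand-in). A defensive pattern is to default the value before the comparison:

```shell
# maybe_flag is a hypothetical stand-in for the unset CI variable.
maybe_flag=""

# Naive test: '[' "" -eq 1 ']' errors out (status 2) when the value is
# empty, exactly the "integer expression expected" message in the log.
if [ "$maybe_flag" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Guarded test: ${var:-0} substitutes 0, so -eq always sees an integer.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or empty"   # → prints "flag unset or empty"
fi
```

In this run the error is harmless (the `if` simply takes the false branch), which is why the test continues past it.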
nvmftestinit 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:14.249 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:14.250 06:13:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:19.730 06:13:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:19.730 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:19.730 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:19.730 Found net devices under 0000:af:00.0: cvl_0_0 00:07:19.730 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:19.731 Found net devices under 0000:af:00.1: cvl_0_1 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:19.731 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:19.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:19.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:07:19.990 00:07:19.990 --- 10.0.0.2 ping statistics --- 00:07:19.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.990 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:19.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:19.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.174 ms 00:07:19.990 00:07:19.990 --- 10.0.0.1 ping statistics --- 00:07:19.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:19.990 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:19.990 06:13:11 
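The nvmf_tcp_init sequence above moves one port of the NIC pair into a network namespace so initiator and target traffic traverse real hardware, then opens the NVMe/TCP port in iptables and verifies reachability with ping in both directions. A dry-run sketch of that plumbing follows; interface names, the namespace name, and addresses are copied from the log, while `run()` and `emit_netns_setup` are hypothetical emit-only helpers so the sequence can be inspected without root. Swap `run()` for direct execution to actually apply it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup nvmf/common.sh performs in the log above.
# run() only echoes each command; nothing here touches the system.
TARGET_IF=cvl_0_0; INITIATOR_IF=cvl_0_1
TARGET_NS=cvl_0_0_ns_spdk
TARGET_IP=10.0.0.2; INITIATOR_IP=10.0.0.1

run() { echo "+ $*"; }   # hypothetical emit-only wrapper

emit_netns_setup() {
    run ip -4 addr flush "$TARGET_IF"
    run ip -4 addr flush "$INITIATOR_IF"
    run ip netns add "$TARGET_NS"
    run ip link set "$TARGET_IF" netns "$TARGET_NS"   # target NIC moves into the namespace
    run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
    run ip netns exec "$TARGET_NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
    run ip link set "$INITIATOR_IF" up
    run ip netns exec "$TARGET_NS" ip link set "$TARGET_IF" up
    run ip netns exec "$TARGET_NS" ip link set lo up
    run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
    run ping -c 1 "$TARGET_IP"                                # initiator -> target
    run ip netns exec "$TARGET_NS" ping -c 1 "$INITIATOR_IP"  # target -> initiator
}
emit_netns_setup
```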
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:19.990 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=805334 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 805334 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 805334 ']' 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.991 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:19.991 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.250 [2024-12-13 06:13:11.682346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:20.250 [2024-12-13 06:13:11.682398] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.250 [2024-12-13 06:13:11.761165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:20.250 [2024-12-13 06:13:11.783559] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:20.250 [2024-12-13 06:13:11.783595] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:20.250 [2024-12-13 06:13:11.783602] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:20.250 [2024-12-13 06:13:11.783608] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:20.250 [2024-12-13 06:13:11.783614] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
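The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message above comes from the waitforlisten helper, which blocks until the freshly launched nvmf_tgt is both alive and reachable over its RPC socket. Below is a simplified, assumed sketch of that poll: it only checks that the pid exists and the socket path has appeared, whereas the real helper in autotest_common.sh also probes the socket with an RPC call, so treat this as an approximation.

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: poll until the target pid is alive and
# its RPC socket path exists, bounded by max_retries. The real helper also
# issues an RPC probe over the socket; this version only checks the path.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
    local retries=0 max_retries=100
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    while (( retries++ < max_retries )); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died before listening
        [ -e "$rpc_addr" ] && return 0           # socket path showed up
        sleep 0.1
    done
    return 1                                     # never listened within budget
}
```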
00:07:20.250 [2024-12-13 06:13:11.784743] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:20.250 [2024-12-13 06:13:11.784745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.250 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.250 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:20.250 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:20.250 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:20.250 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 [2024-12-13 06:13:11.924917] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 [2024-12-13 06:13:11.945130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 NULL1 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 Delay0 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=805364 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:20.509 06:13:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:20.509 [2024-12-13 06:13:12.056015] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
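The rpc_cmd calls in the log build the target under test: a TCP transport, a subsystem capped at 10 namespaces, a listener on 10.0.0.2:4420, and a null bdev wrapped in a delay bdev (1-second artificial latencies) so that plenty of I/O from spdk_nvme_perf is still in flight when the subsystem is deleted. A dry-run sketch of that sequence follows; `rpc()` is a hypothetical emit-only stand-in for scripts/rpc.py, and all argument values are copied from the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the delete_subsystem.sh target setup seen in the log.
# rpc() only echoes; replace it with scripts/rpc.py to drive a real target.
rpc() { echo "rpc.py $*"; }

setup_delete_subsystem_target() {
    rpc nvmf_create_transport -t tcp -o -u 8192
    rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc bdev_null_create NULL1 1000 512   # 1000 MiB null bdev, 512-byte blocks
    rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
}
setup_delete_subsystem_target
```

With the delay bdev in the data path, the nvmf_delete_subsystem call that follows races against queued I/O, which is what produces the aborted "completed with error (sct=0, sc=8)" completions in the perf output.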
00:07:22.413 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:22.413 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.413 06:13:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error 
(sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 starting I/O failed: -6 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 [2024-12-13 06:13:14.171112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd400 is same with the state(6) to be set 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed 
with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Write completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.672 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 
00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 [2024-12-13 06:13:14.171598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd5e0 is same with the state(6) to be set 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error 
(sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 starting I/O failed: -6 00:07:22.673 [2024-12-13 06:13:14.175801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f4568000c80 is same with the state(6) to be set 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read 
completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:22.673 Write completed with error (sct=0, sc=8) 00:07:22.673 Read completed with error (sct=0, sc=8) 00:07:23.608 [2024-12-13 06:13:15.149944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecb190 is same with the 
state(6) to be set 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 [2024-12-13 06:13:15.174269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeccf70 is same with the state(6) to be set 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 
00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 [2024-12-13 06:13:15.174562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xecd7c0 is same with the state(6) to be set 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 [2024-12-13 06:13:15.177022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f456800d060 is same with the state(6) to be set 00:07:23.608 Read completed with error (sct=0, sc=8) 
00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Read completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 Write completed with error (sct=0, sc=8) 00:07:23.608 [2024-12-13 06:13:15.178971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f456800d6c0 is same with the state(6) to be set 00:07:23.608 Initializing NVMe Controllers 00:07:23.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:23.608 Controller IO queue size 128, less than required. 00:07:23.608 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:23.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:23.609 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:23.609 Initialization complete. Launching workers. 
00:07:23.609 ======================================================== 00:07:23.609 Latency(us) 00:07:23.609 Device Information : IOPS MiB/s Average min max 00:07:23.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 169.27 0.08 894043.37 487.86 1006404.88 00:07:23.609 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.83 0.08 931491.76 241.27 2002632.46 00:07:23.609 ======================================================== 00:07:23.609 Total : 326.10 0.16 912052.90 241.27 2002632.46 00:07:23.609 00:07:23.609 [2024-12-13 06:13:15.179530] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xecb190 (9): Bad file descriptor 00:07:23.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:23.609 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.609 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:23.609 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805364 00:07:23.609 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 805364 00:07:24.176 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (805364) - No such process 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 805364 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:24.176 06:13:15 
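After deleting the subsystem, the script polls the perf pid with `kill -0` in a bounded loop (delete_subsystem.sh lines 34-38 in the log); the "No such process" message and the expected-failure `NOT wait` show perf had already exited. A minimal sketch of that bounded poll, with a hypothetical function name:

```shell
#!/usr/bin/env bash
# Sketch of the bounded poll around delete_subsystem.sh lines 34-38: probe
# the pid with kill -0 (signal 0 tests existence without delivering a
# signal) every 0.5 s, giving up after ~30 tries.
wait_for_perf_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 30 )) && return 1   # still running after ~15 s
        sleep 0.5
    done
    return 0                             # pid gone, as in the log
}
```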
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 805364 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 805364 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:24.176 
06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.176 [2024-12-13 06:13:15.709384] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=806030 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:24.176 06:13:15 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:24.176 [2024-12-13 06:13:15.797664] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:24.743 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:24.743 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:24.743 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.310 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.310 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:25.310 06:13:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:25.876 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:25.876 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:25.876 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.134 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.134 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:26.134 06:13:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:26.700 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:26.700 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030 00:07:26.700 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:27.267 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:27.267 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030
00:07:27.267 06:13:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:07:27.526 Initializing NVMe Controllers
00:07:27.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:27.526 Controller IO queue size 128, less than required.
00:07:27.526 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:27.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:27.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:27.526 Initialization complete. Launching workers.
00:07:27.526 ========================================================
00:07:27.526 Latency(us)
00:07:27.526 Device Information : IOPS MiB/s Average min max
00:07:27.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003276.97 1000171.71 1043099.47
00:07:27.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003705.91 1000152.94 1040828.47
00:07:27.526 ========================================================
00:07:27.526 Total : 256.00 0.12 1003491.44 1000152.94 1043099.47
00:07:27.526
00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 806030
00:07:27.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (806030) - No such process
00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 806030
00:07:27.785 06:13:19
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:27.785 rmmod nvme_tcp 00:07:27.785 rmmod nvme_fabrics 00:07:27.785 rmmod nvme_keyring 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 805334 ']' 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 805334 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 805334 ']' 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 805334 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 805334 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 805334' 00:07:27.785 killing process with pid 805334 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 805334 00:07:27.785 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 805334 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:28.044 06:13:19 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:28.044 06:13:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:29.949 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:07:29.949
00:07:29.949 real 0m16.201s
00:07:29.949 user 0m29.224s
00:07:29.949 sys 0m5.455s
00:07:29.949 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:29.949 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:07:29.949 ************************************
00:07:29.949 END TEST nvmf_delete_subsystem
00:07:29.949 ************************************
00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:30.208 ************************************
00:07:30.208 START TEST nvmf_host_management
00:07:30.208 ************************************
00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp
00:07:30.208 * Looking for test storage...
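The delete_subsystem teardown traced above polls the perf PID with `kill -0` inside a bounded `(( delay++ > N ))` / `sleep 0.5` loop until the process disappears. A minimal standalone sketch of that pattern (the function name and arguments are illustrative, not SPDK's exact script):

```shell
#!/usr/bin/env bash
# Bounded wait: poll a PID with `kill -0` until it exits or the retry
# budget runs out, mirroring the delay loop in delete_subsystem.sh.
# (Simplified sketch; wait_for_exit and its arguments are illustrative.)
wait_for_exit() {
    local pid=$1 max=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        # Give up once we have polled more than $max times.
        (( delay++ > max )) && return 1
        sleep 0.5
    done
    return 0
}

sleep 0.2 &
wait_for_exit $! && echo "process exited"
```

Note that `kill -0` sends no signal at all; it only checks whether the PID is still signalable, which is why the loop exits promptly once the perf process is gone.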
00:07:30.208 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:30.208 06:13:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.208 06:13:21 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.208 --rc genhtml_branch_coverage=1 00:07:30.208 --rc genhtml_function_coverage=1 00:07:30.208 --rc genhtml_legend=1 00:07:30.208 --rc geninfo_all_blocks=1 00:07:30.208 --rc geninfo_unexecuted_blocks=1 00:07:30.208 00:07:30.208 ' 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.208 --rc genhtml_branch_coverage=1 00:07:30.208 --rc genhtml_function_coverage=1 00:07:30.208 --rc genhtml_legend=1 00:07:30.208 --rc geninfo_all_blocks=1 00:07:30.208 --rc geninfo_unexecuted_blocks=1 00:07:30.208 00:07:30.208 ' 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.208 --rc genhtml_branch_coverage=1 00:07:30.208 --rc genhtml_function_coverage=1 00:07:30.208 --rc genhtml_legend=1 00:07:30.208 --rc geninfo_all_blocks=1 00:07:30.208 --rc geninfo_unexecuted_blocks=1 00:07:30.208 00:07:30.208 ' 00:07:30.208 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:30.208 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.208 --rc genhtml_branch_coverage=1 00:07:30.208 --rc genhtml_function_coverage=1 00:07:30.208 --rc genhtml_legend=1 00:07:30.208 --rc geninfo_all_blocks=1 00:07:30.209 --rc geninfo_unexecuted_blocks=1 00:07:30.209 00:07:30.209 ' 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
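The lcov gate traced above (`lt 1.15 2` via `cmp_versions`) splits dotted version strings on `.-:` and compares them field by field, with missing fields treated as zero. A simplified standalone rendition of that comparison (assumes bash and purely numeric fields; `version_lt` is an illustrative name, not the helper's real one):

```shell
#!/usr/bin/env bash
# Field-by-field "less than" over dotted version strings, in the spirit
# of the cmp_versions trace above (simplified; numeric fields only).
version_lt() {
    local IFS='.-:'
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        # Missing fields compare as 0, so "1.15" and "1.15.0" are equal.
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

Comparing numerically per field matters: a plain string comparison would wrongly rank 1.2 above 1.10.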
00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:30.209 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:30.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:30.468 06:13:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:37.039 06:13:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:37.039 06:13:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:37.039 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:37.039 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:37.039 06:13:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:37.039 Found net devices under 0000:af:00.0: cvl_0_0 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:37.039 Found net devices under 0000:af:00.1: cvl_0_1 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:37.039 06:13:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:37.039 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 
00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:37.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:37.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:07:37.040 00:07:37.040 --- 10.0.0.2 ping statistics --- 00:07:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.040 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:37.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:37.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:07:37.040 00:07:37.040 --- 10.0.0.1 ping statistics --- 00:07:37.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:37.040 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 
00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=810180 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 810180 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810180 ']' 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.040 06:13:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 [2024-12-13 06:13:27.893202] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:37.040 [2024-12-13 06:13:27.893250] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:37.040 [2024-12-13 06:13:27.975114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:37.040 [2024-12-13 06:13:27.999543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:37.040 [2024-12-13 06:13:27.999583] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:37.040 [2024-12-13 06:13:27.999591] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:37.040 [2024-12-13 06:13:27.999597] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:37.040 [2024-12-13 06:13:27.999603] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:37.040 [2024-12-13 06:13:28.001132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.040 [2024-12-13 06:13:28.001240] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:37.040 [2024-12-13 06:13:28.001347] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:37.040 [2024-12-13 06:13:28.001352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 [2024-12-13 06:13:28.141593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:37.040 06:13:28 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 Malloc0 00:07:37.040 [2024-12-13 06:13:28.219485] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=810229 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 810229 /var/tmp/bdevperf.sock 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 810229 ']' 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:37.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:37.040 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:37.040 { 00:07:37.040 "params": { 00:07:37.040 "name": "Nvme$subsystem", 00:07:37.040 "trtype": "$TEST_TRANSPORT", 00:07:37.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:37.040 "adrfam": "ipv4", 00:07:37.040 "trsvcid": "$NVMF_PORT", 00:07:37.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:37.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:37.041 "hdgst": ${hdgst:-false}, 
00:07:37.041 "ddgst": ${ddgst:-false} 00:07:37.041 }, 00:07:37.041 "method": "bdev_nvme_attach_controller" 00:07:37.041 } 00:07:37.041 EOF 00:07:37.041 )") 00:07:37.041 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:37.041 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:37.041 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:37.041 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:37.041 "params": { 00:07:37.041 "name": "Nvme0", 00:07:37.041 "trtype": "tcp", 00:07:37.041 "traddr": "10.0.0.2", 00:07:37.041 "adrfam": "ipv4", 00:07:37.041 "trsvcid": "4420", 00:07:37.041 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:37.041 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:37.041 "hdgst": false, 00:07:37.041 "ddgst": false 00:07:37.041 }, 00:07:37.041 "method": "bdev_nvme_attach_controller" 00:07:37.041 }' 00:07:37.041 [2024-12-13 06:13:28.312515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:37.041 [2024-12-13 06:13:28.312560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810229 ] 00:07:37.041 [2024-12-13 06:13:28.389453] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.041 [2024-12-13 06:13:28.411619] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.041 Running I/O for 10 seconds... 
00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=78 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 78 -ge 100 ']' 00:07:37.300 06:13:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.560 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:37.560 [2024-12-13 06:13:29.082419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbabe0 is same with the state(6) to be set 00:07:37.560 [2024-12-13 06:13:29.082500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbabe0 is same with the state(6) to be set 00:07:37.560 [2024-12-13 06:13:29.082699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:37.560 [2024-12-13 06:13:29.082851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:37.560 [2024-12-13 06:13:29.082858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:07:37.561 [2024-12-13 06:13:29.082866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:37.561 [2024-12-13 06:13:29.082873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 14 further WRITE command / "ABORTED - SQ DELETION (00/08)" completion pairs (cid:49-62, lba:104576-106240, len:128) elided; identical except for cid and lba ...]
00:07:37.561 [2024-12-13 06:13:29.083091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:37.561 [2024-12-13 06:13:29.083097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:37.561 [2024-12-13 06:13:29.083105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:37.561 [2024-12-13 06:13:29.083111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 38 further READ command / "ABORTED - SQ DELETION (00/08)" completion pairs (cid:1-38, lba:98432-103168, len:128) elided; identical except for cid and lba ...]
00:07:37.562 [2024-12-13 06:13:29.083712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:07:37.562 [2024-12-13 06:13:29.083720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:07:37.562 [2024-12-13 06:13:29.083728] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state
of tqpair=0x2084f50 is same with the state(6) to be set
00:07:37.562 [2024-12-13 06:13:29.084661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:07:37.562 task offset: 103424 on job bdev=Nvme0n1 fails
00:07:37.562
00:07:37.562 Latency(us)
00:07:37.562 [2024-12-13T05:13:29.216Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:37.562 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:37.562 Job: Nvme0n1 ended in about 0.40 seconds with error
00:07:37.562 Verification LBA range: start 0x0 length 0x400
00:07:37.562 Nvme0n1 : 0.40 1929.19 120.57 160.77 0.00 29801.44 1638.40 27213.04
00:07:37.562 [2024-12-13T05:13:29.216Z] ===================================================================================================================
00:07:37.562 [2024-12-13T05:13:29.216Z] Total : 1929.19 120.57 160.77 0.00 29801.44 1638.40 27213.04
00:07:37.562 [2024-12-13 06:13:29.087014] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:07:37.562 [2024-12-13 06:13:29.087036] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2071490 (9): Bad file descriptor
00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:07:37.562 [2024-12-13 06:13:29.093862] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
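The failed run's summary above reports 1929.19 IOPS at the 64 KiB I/O size bdevperf was launched with (`-o 65536`); the MiB/s column follows directly from those two numbers. A quick sanity check of that arithmetic (not part of the test suite, Python used only for illustration):

```python
# Cross-check the bdevperf summary line for Nvme0n1:
# MiB/s = IOPS * io_size_bytes / 2^20
io_size = 65536        # -o 65536 from the bdevperf command line
iops = 1929.19         # IOPS column reported for Nvme0n1
mib_per_s = iops * io_size / (1024 * 1024)
print(round(mib_per_s, 2))  # 120.57, matching the MiB/s column
```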
00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.562 06:13:29 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 810229 00:07:38.498 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (810229) - No such process 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:38.498 { 00:07:38.498 "params": { 00:07:38.498 "name": "Nvme$subsystem", 00:07:38.498 "trtype": "$TEST_TRANSPORT", 00:07:38.498 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:38.498 "adrfam": "ipv4", 00:07:38.498 "trsvcid": "$NVMF_PORT", 00:07:38.498 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:38.498 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:07:38.498 "hdgst": ${hdgst:-false}, 00:07:38.498 "ddgst": ${ddgst:-false} 00:07:38.498 }, 00:07:38.498 "method": "bdev_nvme_attach_controller" 00:07:38.498 } 00:07:38.498 EOF 00:07:38.498 )") 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:38.498 06:13:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:38.498 "params": { 00:07:38.498 "name": "Nvme0", 00:07:38.498 "trtype": "tcp", 00:07:38.498 "traddr": "10.0.0.2", 00:07:38.498 "adrfam": "ipv4", 00:07:38.498 "trsvcid": "4420", 00:07:38.498 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:38.498 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:38.498 "hdgst": false, 00:07:38.498 "ddgst": false 00:07:38.498 }, 00:07:38.498 "method": "bdev_nvme_attach_controller" 00:07:38.498 }' 00:07:38.498 [2024-12-13 06:13:30.146306] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:07:38.498 [2024-12-13 06:13:30.146350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid810480 ] 00:07:38.757 [2024-12-13 06:13:30.222507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.757 [2024-12-13 06:13:30.245068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.757 Running I/O for 1 seconds... 
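The `gen_nvmf_target_json` trace above shows how the bdevperf config is produced: a heredoc is expanded once per subsystem id and the result is normalized through `jq` before being fed to bdevperf via `--json /dev/fd/62`. A minimal Python sketch of that expansion (field values mirror the printed config; the real helper is a bash function in nvmf/common.sh, so this function's shape is an assumption for illustration only):

```python
import json

def gen_nvmf_target_json(subsystem=0, traddr="10.0.0.2", trsvcid="4420"):
    # One bdev_nvme_attach_controller entry per subsystem id,
    # with hdgst/ddgst defaulting to false as in the heredoc.
    return json.dumps({
        "params": {
            "name": f"Nvme{subsystem}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{subsystem}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{subsystem}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }, indent=1)

print(gen_nvmf_target_json(0))
```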
00:07:40.134 2048.00 IOPS, 128.00 MiB/s
00:07:40.134 Latency(us)
00:07:40.134 [2024-12-13T05:13:31.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:40.134 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:07:40.134 Verification LBA range: start 0x0 length 0x400
00:07:40.134 Nvme0n1 : 1.03 2055.29 128.46 0.00 0.00 30655.57 4899.60 27088.21
00:07:40.134 [2024-12-13T05:13:31.788Z] ===================================================================================================================
00:07:40.134 [2024-12-13T05:13:31.788Z] Total : 2055.29 128.46 0.00 0.00 30655.57 4899.60 27088.21
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:07:40.134 06:13:31
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:40.134 rmmod nvme_tcp 00:07:40.134 rmmod nvme_fabrics 00:07:40.134 rmmod nvme_keyring 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 810180 ']' 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 810180 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 810180 ']' 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 810180 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 810180 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 810180' 00:07:40.134 killing process with pid 810180 00:07:40.134 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 810180 00:07:40.134 06:13:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 810180 00:07:40.393 [2024-12-13 06:13:31.864341] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:40.394 06:13:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.929 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:42.929 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:42.929 00:07:42.929 real 0m12.286s 00:07:42.929 user 0m19.319s 
00:07:42.929 sys 0m5.579s 00:07:42.929 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.929 06:13:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:42.929 ************************************ 00:07:42.929 END TEST nvmf_host_management 00:07:42.929 ************************************ 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:42.929 ************************************ 00:07:42.929 START TEST nvmf_lvol 00:07:42.929 ************************************ 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:42.929 * Looking for test storage... 
00:07:42.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:42.929 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:42.930 06:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.930 --rc genhtml_branch_coverage=1 00:07:42.930 --rc genhtml_function_coverage=1 00:07:42.930 --rc genhtml_legend=1 00:07:42.930 --rc geninfo_all_blocks=1 00:07:42.930 --rc geninfo_unexecuted_blocks=1 
00:07:42.930 00:07:42.930 ' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.930 --rc genhtml_branch_coverage=1 00:07:42.930 --rc genhtml_function_coverage=1 00:07:42.930 --rc genhtml_legend=1 00:07:42.930 --rc geninfo_all_blocks=1 00:07:42.930 --rc geninfo_unexecuted_blocks=1 00:07:42.930 00:07:42.930 ' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.930 --rc genhtml_branch_coverage=1 00:07:42.930 --rc genhtml_function_coverage=1 00:07:42.930 --rc genhtml_legend=1 00:07:42.930 --rc geninfo_all_blocks=1 00:07:42.930 --rc geninfo_unexecuted_blocks=1 00:07:42.930 00:07:42.930 ' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:42.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:42.930 --rc genhtml_branch_coverage=1 00:07:42.930 --rc genhtml_function_coverage=1 00:07:42.930 --rc genhtml_legend=1 00:07:42.930 --rc geninfo_all_blocks=1 00:07:42.930 --rc geninfo_unexecuted_blocks=1 00:07:42.930 00:07:42.930 ' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:42.930 06:13:34 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:42.930 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:42.930 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:42.931 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:42.931 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:42.931 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:42.931 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:42.931 06:13:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:48.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:48.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:48.202 
06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:48.202 Found net devices under 0000:af:00.0: cvl_0_0 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:48.202 06:13:39 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:48.202 Found net devices under 0000:af:00.1: cvl_0_1 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:48.202 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:48.203 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:48.462 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:48.462 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:48.462 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:48.462 06:13:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:48.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:48.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.290 ms 00:07:48.462 00:07:48.462 --- 10.0.0.2 ping statistics --- 00:07:48.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.462 rtt min/avg/max/mdev = 0.290/0.290/0.290/0.000 ms 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:48.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:48.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:07:48.462 00:07:48.462 --- 10.0.0.1 ping statistics --- 00:07:48.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:48.462 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=814391 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 814391 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 814391 ']' 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:48.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:48.462 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.720 [2024-12-13 06:13:40.153174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:07:48.720 [2024-12-13 06:13:40.153217] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:48.720 [2024-12-13 06:13:40.228732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:48.720 [2024-12-13 06:13:40.250678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:48.720 [2024-12-13 06:13:40.250718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:48.720 [2024-12-13 06:13:40.250725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:48.720 [2024-12-13 06:13:40.250731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:48.720 [2024-12-13 06:13:40.250737] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:48.720 [2024-12-13 06:13:40.252068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:48.720 [2024-12-13 06:13:40.252177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.720 [2024-12-13 06:13:40.252178] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:48.720 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:48.720 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:48.720 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:48.720 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:48.720 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:48.978 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:48.978 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:48.978 [2024-12-13 06:13:40.548230] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:48.978 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:49.237 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:07:49.237 06:13:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:49.495 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:49.495 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:49.754 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:50.013 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f640972a-7fe4-44e7-a0e5-6d23c504a17a 00:07:50.013 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f640972a-7fe4-44e7-a0e5-6d23c504a17a lvol 20 00:07:50.013 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=f2dea44a-42ee-44e5-9b47-998126738b8e 00:07:50.013 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:50.271 06:13:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f2dea44a-42ee-44e5-9b47-998126738b8e 00:07:50.530 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:50.788 [2024-12-13 06:13:42.195778] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.788 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:50.788 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:50.788 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=814667 00:07:50.788 06:13:42 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:52.166 06:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot f2dea44a-42ee-44e5-9b47-998126738b8e MY_SNAPSHOT 00:07:52.166 06:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=19e66007-b94a-4ef6-8946-7e71a824be9e 00:07:52.166 06:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize f2dea44a-42ee-44e5-9b47-998126738b8e 30 00:07:52.425 06:13:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 19e66007-b94a-4ef6-8946-7e71a824be9e MY_CLONE 00:07:52.683 06:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=777eef97-ed0a-4663-8e64-2ffca11ef2e7 00:07:52.683 06:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 777eef97-ed0a-4663-8e64-2ffca11ef2e7 00:07:53.251 06:13:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 814667 00:08:01.369 Initializing NVMe Controllers 00:08:01.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:01.369 Controller IO queue size 128, less than required. 00:08:01.369 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:01.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:01.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:01.369 Initialization complete. Launching workers. 00:08:01.369 ======================================================== 00:08:01.369 Latency(us) 00:08:01.369 Device Information : IOPS MiB/s Average min max 00:08:01.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12180.40 47.58 10510.75 1530.08 55893.77 00:08:01.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12324.80 48.14 10386.88 3441.65 61228.85 00:08:01.369 ======================================================== 00:08:01.369 Total : 24505.20 95.72 10448.45 1530.08 61228.85 00:08:01.369 00:08:01.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:01.369 06:13:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f2dea44a-42ee-44e5-9b47-998126738b8e 00:08:01.628 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f640972a-7fe4-44e7-a0e5-6d23c504a17a 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:01.887 rmmod nvme_tcp 00:08:01.887 rmmod nvme_fabrics 00:08:01.887 rmmod nvme_keyring 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 814391 ']' 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 814391 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 814391 ']' 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 814391 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 814391 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 814391' 00:08:01.887 killing process with pid 814391 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 814391 00:08:01.887 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 814391 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:02.146 06:13:53 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:04.682 00:08:04.682 real 0m21.716s 00:08:04.682 user 1m2.792s 00:08:04.682 sys 0m7.463s 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:04.682 ************************************ 00:08:04.682 END TEST nvmf_lvol 00:08:04.682 
************************************ 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:04.682 ************************************ 00:08:04.682 START TEST nvmf_lvs_grow 00:08:04.682 ************************************ 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:04.682 * Looking for test storage... 00:08:04.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:04.682 06:13:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:04.682 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:04.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.683 --rc genhtml_branch_coverage=1 00:08:04.683 --rc genhtml_function_coverage=1 00:08:04.683 --rc genhtml_legend=1 00:08:04.683 --rc geninfo_all_blocks=1 00:08:04.683 --rc geninfo_unexecuted_blocks=1 00:08:04.683 00:08:04.683 ' 
00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:04.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.683 --rc genhtml_branch_coverage=1 00:08:04.683 --rc genhtml_function_coverage=1 00:08:04.683 --rc genhtml_legend=1 00:08:04.683 --rc geninfo_all_blocks=1 00:08:04.683 --rc geninfo_unexecuted_blocks=1 00:08:04.683 00:08:04.683 ' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:04.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.683 --rc genhtml_branch_coverage=1 00:08:04.683 --rc genhtml_function_coverage=1 00:08:04.683 --rc genhtml_legend=1 00:08:04.683 --rc geninfo_all_blocks=1 00:08:04.683 --rc geninfo_unexecuted_blocks=1 00:08:04.683 00:08:04.683 ' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:04.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:04.683 --rc genhtml_branch_coverage=1 00:08:04.683 --rc genhtml_function_coverage=1 00:08:04.683 --rc genhtml_legend=1 00:08:04.683 --rc geninfo_all_blocks=1 00:08:04.683 --rc geninfo_unexecuted_blocks=1 00:08:04.683 00:08:04.683 ' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:04.683 06:13:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:04.683 
06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:04.683 06:13:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:04.683 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:04.683 
06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:04.683 06:13:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:11.253 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:11.254 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:11.254 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:11.254 
06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:11.254 Found net devices under 0000:af:00.0: cvl_0_0 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:11.254 Found net devices under 0000:af:00.1: cvl_0_1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:11.254 06:14:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:11.254 06:14:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:08:11.254 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:08:11.254 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.443 ms
00:08:11.254
00:08:11.254 --- 10.0.0.2 ping statistics ---
00:08:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:11.254 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:08:11.254 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:08:11.254 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms
00:08:11.254
00:08:11.254 --- 10.0.0.1 ping statistics ---
00:08:11.254 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:08:11.254 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- #
nvmfappstart -m 0x1 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=820145 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 820145 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 820145 ']' 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.254 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.254 [2024-12-13 06:14:02.121170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:11.254 [2024-12-13 06:14:02.121221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:11.254 [2024-12-13 06:14:02.200804] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:11.254 [2024-12-13 06:14:02.222510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:11.254 [2024-12-13 06:14:02.222544] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:11.254 [2024-12-13 06:14:02.222551] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:11.254 [2024-12-13 06:14:02.222557] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:11.255 [2024-12-13 06:14:02.222562] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:11.255 [2024-12-13 06:14:02.223023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:11.255 [2024-12-13 06:14:02.515313] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.255 ************************************ 00:08:11.255 START TEST lvs_grow_clean 00:08:11.255 ************************************ 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:11.255 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:11.514 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:11.514 06:14:02 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:11.514 06:14:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:11.514 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:11.514 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:11.514 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 78a53791-e93d-4166-85fb-e2e67a32ab7d lvol 150 00:08:11.772 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=a6a2fed1-88c8-4637-b030-41787723e0d1 00:08:11.772 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:11.773 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:12.031 [2024-12-13 06:14:03.512258] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:12.031 [2024-12-13 06:14:03.512307] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:12.031 true 00:08:12.031 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:12.032 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:12.290 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:12.290 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:12.290 06:14:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a6a2fed1-88c8-4637-b030-41787723e0d1 00:08:12.549 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.808 [2024-12-13 06:14:04.246459] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=820525 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:12.808 06:14:04 
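The `total_data_clusters` checks in this test follow from simple cluster geometry: the AIO file starts at 200 MiB with a 4 MiB cluster size (`--cluster-sz 4194304`), and in this run one cluster's worth goes to blobstore metadata (an observed overhead, not necessarily fixed across SPDK versions), leaving the 49 data clusters asserted above — and 99 once the file has been truncated to 400 MiB and rescanned:

```shell
# Cluster math behind the data_clusters==49 and later ==99 checks.
# md_clusters=1 is what this run shows as metadata overhead; treat it
# as an observed value rather than a guaranteed constant.
cluster_mb=4
md_clusters=1
for aio_mb in 200 400; do
    echo $(( aio_mb / cluster_mb - md_clusters ))
done
# prints 49, then 99
```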
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 820525 /var/tmp/bdevperf.sock 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 820525 ']' 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:12.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:12.808 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:13.068 [2024-12-13 06:14:04.492174] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:13.068 [2024-12-13 06:14:04.492221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid820525 ] 00:08:13.068 [2024-12-13 06:14:04.568715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.068 [2024-12-13 06:14:04.591211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:13.068 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:13.068 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:13.068 06:14:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:13.637 Nvme0n1 00:08:13.637 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:13.637 [ 00:08:13.637 { 00:08:13.637 "name": "Nvme0n1", 00:08:13.637 "aliases": [ 00:08:13.637 "a6a2fed1-88c8-4637-b030-41787723e0d1" 00:08:13.637 ], 00:08:13.637 "product_name": "NVMe disk", 00:08:13.637 "block_size": 4096, 00:08:13.637 "num_blocks": 38912, 00:08:13.637 "uuid": "a6a2fed1-88c8-4637-b030-41787723e0d1", 00:08:13.637 "numa_id": 1, 00:08:13.637 "assigned_rate_limits": { 00:08:13.637 "rw_ios_per_sec": 0, 00:08:13.637 "rw_mbytes_per_sec": 0, 00:08:13.637 "r_mbytes_per_sec": 0, 00:08:13.637 "w_mbytes_per_sec": 0 00:08:13.637 }, 00:08:13.637 "claimed": false, 00:08:13.637 "zoned": false, 00:08:13.637 "supported_io_types": { 00:08:13.637 "read": true, 
00:08:13.637 "write": true, 00:08:13.637 "unmap": true, 00:08:13.637 "flush": true, 00:08:13.637 "reset": true, 00:08:13.637 "nvme_admin": true, 00:08:13.637 "nvme_io": true, 00:08:13.637 "nvme_io_md": false, 00:08:13.637 "write_zeroes": true, 00:08:13.637 "zcopy": false, 00:08:13.637 "get_zone_info": false, 00:08:13.637 "zone_management": false, 00:08:13.637 "zone_append": false, 00:08:13.637 "compare": true, 00:08:13.637 "compare_and_write": true, 00:08:13.637 "abort": true, 00:08:13.637 "seek_hole": false, 00:08:13.637 "seek_data": false, 00:08:13.637 "copy": true, 00:08:13.637 "nvme_iov_md": false 00:08:13.637 }, 00:08:13.637 "memory_domains": [ 00:08:13.637 { 00:08:13.637 "dma_device_id": "system", 00:08:13.637 "dma_device_type": 1 00:08:13.637 } 00:08:13.637 ], 00:08:13.637 "driver_specific": { 00:08:13.637 "nvme": [ 00:08:13.637 { 00:08:13.637 "trid": { 00:08:13.637 "trtype": "TCP", 00:08:13.637 "adrfam": "IPv4", 00:08:13.637 "traddr": "10.0.0.2", 00:08:13.637 "trsvcid": "4420", 00:08:13.637 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:13.637 }, 00:08:13.637 "ctrlr_data": { 00:08:13.637 "cntlid": 1, 00:08:13.637 "vendor_id": "0x8086", 00:08:13.637 "model_number": "SPDK bdev Controller", 00:08:13.637 "serial_number": "SPDK0", 00:08:13.637 "firmware_revision": "25.01", 00:08:13.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:13.637 "oacs": { 00:08:13.637 "security": 0, 00:08:13.637 "format": 0, 00:08:13.637 "firmware": 0, 00:08:13.637 "ns_manage": 0 00:08:13.637 }, 00:08:13.637 "multi_ctrlr": true, 00:08:13.637 "ana_reporting": false 00:08:13.637 }, 00:08:13.637 "vs": { 00:08:13.637 "nvme_version": "1.3" 00:08:13.637 }, 00:08:13.637 "ns_data": { 00:08:13.637 "id": 1, 00:08:13.637 "can_share": true 00:08:13.637 } 00:08:13.637 } 00:08:13.637 ], 00:08:13.637 "mp_policy": "active_passive" 00:08:13.637 } 00:08:13.637 } 00:08:13.637 ] 00:08:13.897 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=820655 
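The `"num_blocks": 38912` in the bdev dump above is the 150 MiB lvol rounded up to whole 4 MiB clusters: 150/4 = 37.5, so 38 clusters are allocated (matching `"num_allocated_clusters": 38` later in the log), i.e. 38 × 4 MiB ÷ 4096 B = 38912 blocks. A quick check:

```shell
# 150 MiB lvol on 4 MiB clusters: round up to whole clusters, then
# convert to 4096-byte blocks. Matches num_blocks=38912 in the dump.
lvol_mb=150 cluster_mb=4 block_sz=4096
clusters=$(( (lvol_mb + cluster_mb - 1) / cluster_mb ))   # 38
blocks=$(( clusters * cluster_mb * 1024 * 1024 / block_sz ))
echo "$clusters $blocks"   # prints: 38 38912
```

The same 38 allocated clusters also explain the later `free_clusters=61` assertion: 99 data clusters in the grown lvstore minus 38 in use.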
00:08:13.897 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:13.897 06:14:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:13.897 Running I/O for 10 seconds... 00:08:14.833 Latency(us) 00:08:14.833 [2024-12-13T05:14:06.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:14.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:14.833 Nvme0n1 : 1.00 23451.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:14.833 [2024-12-13T05:14:06.487Z] =================================================================================================================== 00:08:14.833 [2024-12-13T05:14:06.487Z] Total : 23451.00 91.61 0.00 0.00 0.00 0.00 0.00 00:08:14.833 00:08:15.769 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:15.769 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:15.769 Nvme0n1 : 2.00 23559.50 92.03 0.00 0.00 0.00 0.00 0.00 00:08:15.769 [2024-12-13T05:14:07.423Z] =================================================================================================================== 00:08:15.769 [2024-12-13T05:14:07.423Z] Total : 23559.50 92.03 0.00 0.00 0.00 0.00 0.00 00:08:15.769 00:08:16.028 true 00:08:16.028 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:16.028 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:16.286 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:16.286 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:16.287 06:14:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 820655 00:08:16.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:16.854 Nvme0n1 : 3.00 23562.67 92.04 0.00 0.00 0.00 0.00 0.00 00:08:16.854 [2024-12-13T05:14:08.508Z] =================================================================================================================== 00:08:16.854 [2024-12-13T05:14:08.508Z] Total : 23562.67 92.04 0.00 0.00 0.00 0.00 0.00 00:08:16.854 00:08:17.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:17.790 Nvme0n1 : 4.00 23641.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:17.790 [2024-12-13T05:14:09.444Z] =================================================================================================================== 00:08:17.790 [2024-12-13T05:14:09.444Z] Total : 23641.00 92.35 0.00 0.00 0.00 0.00 0.00 00:08:17.790 00:08:19.168 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:19.168 Nvme0n1 : 5.00 23601.00 92.19 0.00 0.00 0.00 0.00 0.00 00:08:19.168 [2024-12-13T05:14:10.822Z] =================================================================================================================== 00:08:19.168 [2024-12-13T05:14:10.822Z] Total : 23601.00 92.19 0.00 0.00 0.00 0.00 0.00 00:08:19.168 00:08:20.104 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:20.104 Nvme0n1 : 6.00 23635.00 92.32 0.00 0.00 0.00 0.00 0.00 00:08:20.104 [2024-12-13T05:14:11.758Z] =================================================================================================================== 00:08:20.104 
[2024-12-13T05:14:11.758Z] Total : 23635.00 92.32 0.00 0.00 0.00 0.00 0.00 00:08:20.104 00:08:21.040 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.040 Nvme0n1 : 7.00 23688.43 92.53 0.00 0.00 0.00 0.00 0.00 00:08:21.040 [2024-12-13T05:14:12.694Z] =================================================================================================================== 00:08:21.040 [2024-12-13T05:14:12.694Z] Total : 23688.43 92.53 0.00 0.00 0.00 0.00 0.00 00:08:21.040 00:08:21.976 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:21.976 Nvme0n1 : 8.00 23721.62 92.66 0.00 0.00 0.00 0.00 0.00 00:08:21.976 [2024-12-13T05:14:13.630Z] =================================================================================================================== 00:08:21.976 [2024-12-13T05:14:13.630Z] Total : 23721.62 92.66 0.00 0.00 0.00 0.00 0.00 00:08:21.976 00:08:22.912 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:22.912 Nvme0n1 : 9.00 23737.11 92.72 0.00 0.00 0.00 0.00 0.00 00:08:22.912 [2024-12-13T05:14:14.566Z] =================================================================================================================== 00:08:22.912 [2024-12-13T05:14:14.566Z] Total : 23737.11 92.72 0.00 0.00 0.00 0.00 0.00 00:08:22.912 00:08:23.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:23.848 Nvme0n1 : 10.00 23764.50 92.83 0.00 0.00 0.00 0.00 0.00 00:08:23.848 [2024-12-13T05:14:15.502Z] =================================================================================================================== 00:08:23.848 [2024-12-13T05:14:15.502Z] Total : 23764.50 92.83 0.00 0.00 0.00 0.00 0.00 00:08:23.848 00:08:23.848 00:08:23.848 Latency(us) 00:08:23.848 [2024-12-13T05:14:15.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.848 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:23.848 Nvme0n1 : 10.00 23762.14 92.82 0.00 0.00 5383.34 2340.57 11921.31 00:08:23.848 [2024-12-13T05:14:15.502Z] =================================================================================================================== 00:08:23.848 [2024-12-13T05:14:15.502Z] Total : 23762.14 92.82 0.00 0.00 5383.34 2340.57 11921.31 00:08:23.848 { 00:08:23.848 "results": [ 00:08:23.848 { 00:08:23.848 "job": "Nvme0n1", 00:08:23.848 "core_mask": "0x2", 00:08:23.848 "workload": "randwrite", 00:08:23.848 "status": "finished", 00:08:23.848 "queue_depth": 128, 00:08:23.848 "io_size": 4096, 00:08:23.848 "runtime": 10.00373, 00:08:23.848 "iops": 23762.13672300232, 00:08:23.848 "mibps": 92.82084657422782, 00:08:23.848 "io_failed": 0, 00:08:23.848 "io_timeout": 0, 00:08:23.848 "avg_latency_us": 5383.336374317646, 00:08:23.848 "min_latency_us": 2340.5714285714284, 00:08:23.848 "max_latency_us": 11921.310476190476 00:08:23.848 } 00:08:23.848 ], 00:08:23.848 "core_count": 1 00:08:23.848 } 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 820525 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 820525 ']' 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 820525 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 820525 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:23.848 06:14:15 
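bdevperf's MiB/s column in the final results above is just IOPS scaled by the 4 KiB IO size, and the results JSON makes the relation easy to cross-check:

```shell
# Cross-check the results JSON above: mibps = iops * io_size / 2^20,
# using the iops value bdevperf reported for this run.
awk 'BEGIN { iops = 23762.13672300232; io = 4096
             printf "%.2f MiB/s\n", iops * io / (1024 * 1024) }'
# prints: 92.82 MiB/s
```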
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 820525' 00:08:23.848 killing process with pid 820525 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 820525 00:08:23.848 Received shutdown signal, test time was about 10.000000 seconds 00:08:23.848 00:08:23.848 Latency(us) 00:08:23.848 [2024-12-13T05:14:15.502Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:23.848 [2024-12-13T05:14:15.502Z] =================================================================================================================== 00:08:23.848 [2024-12-13T05:14:15.502Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:23.848 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 820525 00:08:24.107 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.366 06:14:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:24.624 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:24.624 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:24.624 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:24.624 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:24.624 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:24.883 [2024-12-13 06:14:16.398058] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.883 06:14:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:24.883 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:25.142 request: 00:08:25.142 { 00:08:25.142 "uuid": "78a53791-e93d-4166-85fb-e2e67a32ab7d", 00:08:25.142 "method": "bdev_lvol_get_lvstores", 00:08:25.142 "req_id": 1 00:08:25.142 } 00:08:25.142 Got JSON-RPC error response 00:08:25.142 response: 00:08:25.142 { 00:08:25.142 "code": -19, 00:08:25.142 "message": "No such device" 00:08:25.142 } 00:08:25.142 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:25.142 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:25.142 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:25.142 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:25.142 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.401 aio_bdev 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev a6a2fed1-88c8-4637-b030-41787723e0d1 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=a6a2fed1-88c8-4637-b030-41787723e0d1 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:25.401 06:14:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:25.401 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a6a2fed1-88c8-4637-b030-41787723e0d1 -t 2000 00:08:25.660 [ 00:08:25.660 { 00:08:25.660 "name": "a6a2fed1-88c8-4637-b030-41787723e0d1", 00:08:25.660 "aliases": [ 00:08:25.660 "lvs/lvol" 00:08:25.660 ], 00:08:25.660 "product_name": "Logical Volume", 00:08:25.660 "block_size": 4096, 00:08:25.660 "num_blocks": 38912, 00:08:25.660 "uuid": "a6a2fed1-88c8-4637-b030-41787723e0d1", 00:08:25.660 "assigned_rate_limits": { 00:08:25.660 "rw_ios_per_sec": 0, 00:08:25.660 "rw_mbytes_per_sec": 0, 00:08:25.660 "r_mbytes_per_sec": 0, 00:08:25.660 "w_mbytes_per_sec": 0 00:08:25.660 }, 00:08:25.660 "claimed": false, 00:08:25.660 "zoned": false, 00:08:25.660 "supported_io_types": { 00:08:25.660 "read": true, 00:08:25.660 "write": true, 00:08:25.660 "unmap": true, 00:08:25.660 "flush": false, 00:08:25.660 "reset": true, 00:08:25.660 
"nvme_admin": false, 00:08:25.660 "nvme_io": false, 00:08:25.660 "nvme_io_md": false, 00:08:25.660 "write_zeroes": true, 00:08:25.660 "zcopy": false, 00:08:25.660 "get_zone_info": false, 00:08:25.660 "zone_management": false, 00:08:25.660 "zone_append": false, 00:08:25.660 "compare": false, 00:08:25.660 "compare_and_write": false, 00:08:25.660 "abort": false, 00:08:25.660 "seek_hole": true, 00:08:25.660 "seek_data": true, 00:08:25.660 "copy": false, 00:08:25.660 "nvme_iov_md": false 00:08:25.660 }, 00:08:25.660 "driver_specific": { 00:08:25.660 "lvol": { 00:08:25.660 "lvol_store_uuid": "78a53791-e93d-4166-85fb-e2e67a32ab7d", 00:08:25.660 "base_bdev": "aio_bdev", 00:08:25.660 "thin_provision": false, 00:08:25.660 "num_allocated_clusters": 38, 00:08:25.660 "snapshot": false, 00:08:25.660 "clone": false, 00:08:25.660 "esnap_clone": false 00:08:25.660 } 00:08:25.660 } 00:08:25.660 } 00:08:25.660 ] 00:08:25.660 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:25.660 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:25.660 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:25.919 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:25.919 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:25.919 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:26.178 06:14:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:26.178 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a6a2fed1-88c8-4637-b030-41787723e0d1 00:08:26.178 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 78a53791-e93d-4166-85fb-e2e67a32ab7d 00:08:26.436 06:14:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.695 00:08:26.695 real 0m15.639s 00:08:26.695 user 0m15.222s 00:08:26.695 sys 0m1.451s 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.695 ************************************ 00:08:26.695 END TEST lvs_grow_clean 00:08:26.695 ************************************ 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:26.695 ************************************ 
00:08:26.695 START TEST lvs_grow_dirty 00:08:26.695 ************************************ 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:26.695 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:26.954 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:26.954 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:27.213 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:27.213 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:27.213 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:27.471 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:27.471 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:27.471 06:14:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 417a9e93-57ec-405a-ac1f-e4384758ff2b lvol 150 00:08:27.471 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e20069b8-9399-4288-a168-20bf756cf183 00:08:27.471 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:27.471 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:27.730 [2024-12-13 06:14:19.245337] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:27.730 [2024-12-13 06:14:19.245387] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:27.730 true 00:08:27.730 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:27.730 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:27.989 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:27.989 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:27.989 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e20069b8-9399-4288-a168-20bf756cf183 00:08:28.248 06:14:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:28.506 [2024-12-13 06:14:19.987499] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:28.506 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=823173 00:08:28.764 06:14:20 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 823173 /var/tmp/bdevperf.sock 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 823173 ']' 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:28.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:28.764 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:28.764 [2024-12-13 06:14:20.266439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:28.764 [2024-12-13 06:14:20.266498] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid823173 ] 00:08:28.764 [2024-12-13 06:14:20.343856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.764 [2024-12-13 06:14:20.366467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.022 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.022 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:29.022 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:29.280 Nvme0n1 00:08:29.280 06:14:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:29.538 [ 00:08:29.538 { 00:08:29.538 "name": "Nvme0n1", 00:08:29.538 "aliases": [ 00:08:29.538 "e20069b8-9399-4288-a168-20bf756cf183" 00:08:29.538 ], 00:08:29.538 "product_name": "NVMe disk", 00:08:29.538 "block_size": 4096, 00:08:29.538 "num_blocks": 38912, 00:08:29.538 "uuid": "e20069b8-9399-4288-a168-20bf756cf183", 00:08:29.538 "numa_id": 1, 00:08:29.538 "assigned_rate_limits": { 00:08:29.538 "rw_ios_per_sec": 0, 00:08:29.538 "rw_mbytes_per_sec": 0, 00:08:29.538 "r_mbytes_per_sec": 0, 00:08:29.538 "w_mbytes_per_sec": 0 00:08:29.538 }, 00:08:29.538 "claimed": false, 00:08:29.538 "zoned": false, 00:08:29.538 "supported_io_types": { 00:08:29.538 "read": true, 
00:08:29.538 "write": true, 00:08:29.538 "unmap": true, 00:08:29.538 "flush": true, 00:08:29.538 "reset": true, 00:08:29.538 "nvme_admin": true, 00:08:29.538 "nvme_io": true, 00:08:29.538 "nvme_io_md": false, 00:08:29.538 "write_zeroes": true, 00:08:29.538 "zcopy": false, 00:08:29.538 "get_zone_info": false, 00:08:29.538 "zone_management": false, 00:08:29.538 "zone_append": false, 00:08:29.538 "compare": true, 00:08:29.538 "compare_and_write": true, 00:08:29.538 "abort": true, 00:08:29.538 "seek_hole": false, 00:08:29.538 "seek_data": false, 00:08:29.538 "copy": true, 00:08:29.538 "nvme_iov_md": false 00:08:29.538 }, 00:08:29.538 "memory_domains": [ 00:08:29.538 { 00:08:29.538 "dma_device_id": "system", 00:08:29.538 "dma_device_type": 1 00:08:29.538 } 00:08:29.538 ], 00:08:29.538 "driver_specific": { 00:08:29.538 "nvme": [ 00:08:29.538 { 00:08:29.538 "trid": { 00:08:29.538 "trtype": "TCP", 00:08:29.538 "adrfam": "IPv4", 00:08:29.538 "traddr": "10.0.0.2", 00:08:29.538 "trsvcid": "4420", 00:08:29.538 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:29.538 }, 00:08:29.538 "ctrlr_data": { 00:08:29.539 "cntlid": 1, 00:08:29.539 "vendor_id": "0x8086", 00:08:29.539 "model_number": "SPDK bdev Controller", 00:08:29.539 "serial_number": "SPDK0", 00:08:29.539 "firmware_revision": "25.01", 00:08:29.539 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:29.539 "oacs": { 00:08:29.539 "security": 0, 00:08:29.539 "format": 0, 00:08:29.539 "firmware": 0, 00:08:29.539 "ns_manage": 0 00:08:29.539 }, 00:08:29.539 "multi_ctrlr": true, 00:08:29.539 "ana_reporting": false 00:08:29.539 }, 00:08:29.539 "vs": { 00:08:29.539 "nvme_version": "1.3" 00:08:29.539 }, 00:08:29.539 "ns_data": { 00:08:29.539 "id": 1, 00:08:29.539 "can_share": true 00:08:29.539 } 00:08:29.539 } 00:08:29.539 ], 00:08:29.539 "mp_policy": "active_passive" 00:08:29.539 } 00:08:29.539 } 00:08:29.539 ] 00:08:29.539 06:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=823394 
00:08:29.539 06:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:29.539 06:14:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.539 Running I/O for 10 seconds... 00:08:30.474 Latency(us) 00:08:30.474 [2024-12-13T05:14:22.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:30.474 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.474 Nvme0n1 : 1.00 23364.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:30.474 [2024-12-13T05:14:22.128Z] =================================================================================================================== 00:08:30.474 [2024-12-13T05:14:22.128Z] Total : 23364.00 91.27 0.00 0.00 0.00 0.00 0.00 00:08:30.474 00:08:31.411 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:31.670 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.670 Nvme0n1 : 2.00 23603.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:31.670 [2024-12-13T05:14:23.324Z] =================================================================================================================== 00:08:31.670 [2024-12-13T05:14:23.324Z] Total : 23603.00 92.20 0.00 0.00 0.00 0.00 0.00 00:08:31.670 00:08:31.670 true 00:08:31.670 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:31.670 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:31.929 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:31.929 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:31.929 06:14:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 823394 00:08:32.496 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.496 Nvme0n1 : 3.00 23653.00 92.39 0.00 0.00 0.00 0.00 0.00 00:08:32.496 [2024-12-13T05:14:24.150Z] =================================================================================================================== 00:08:32.496 [2024-12-13T05:14:24.150Z] Total : 23653.00 92.39 0.00 0.00 0.00 0.00 0.00 00:08:32.496 00:08:33.873 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.873 Nvme0n1 : 4.00 23715.50 92.64 0.00 0.00 0.00 0.00 0.00 00:08:33.873 [2024-12-13T05:14:25.527Z] =================================================================================================================== 00:08:33.873 [2024-12-13T05:14:25.527Z] Total : 23715.50 92.64 0.00 0.00 0.00 0.00 0.00 00:08:33.873 00:08:34.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.809 Nvme0n1 : 5.00 23768.60 92.85 0.00 0.00 0.00 0.00 0.00 00:08:34.809 [2024-12-13T05:14:26.463Z] =================================================================================================================== 00:08:34.809 [2024-12-13T05:14:26.463Z] Total : 23768.60 92.85 0.00 0.00 0.00 0.00 0.00 00:08:34.809 00:08:35.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.745 Nvme0n1 : 6.00 23813.67 93.02 0.00 0.00 0.00 0.00 0.00 00:08:35.745 [2024-12-13T05:14:27.399Z] =================================================================================================================== 00:08:35.745 
[2024-12-13T05:14:27.399Z] Total : 23813.67 93.02 0.00 0.00 0.00 0.00 0.00 00:08:35.745 00:08:36.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.864 Nvme0n1 : 7.00 23822.86 93.06 0.00 0.00 0.00 0.00 0.00 00:08:36.864 [2024-12-13T05:14:28.518Z] =================================================================================================================== 00:08:36.864 [2024-12-13T05:14:28.518Z] Total : 23822.86 93.06 0.00 0.00 0.00 0.00 0.00 00:08:36.864 00:08:37.846 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.846 Nvme0n1 : 8.00 23833.25 93.10 0.00 0.00 0.00 0.00 0.00 00:08:37.846 [2024-12-13T05:14:29.500Z] =================================================================================================================== 00:08:37.846 [2024-12-13T05:14:29.500Z] Total : 23833.25 93.10 0.00 0.00 0.00 0.00 0.00 00:08:37.846 00:08:38.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.782 Nvme0n1 : 9.00 23832.67 93.10 0.00 0.00 0.00 0.00 0.00 00:08:38.782 [2024-12-13T05:14:30.436Z] =================================================================================================================== 00:08:38.782 [2024-12-13T05:14:30.436Z] Total : 23832.67 93.10 0.00 0.00 0.00 0.00 0.00 00:08:38.782 00:08:39.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.717 Nvme0n1 : 10.00 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:39.717 [2024-12-13T05:14:31.371Z] =================================================================================================================== 00:08:39.717 [2024-12-13T05:14:31.371Z] Total : 23815.00 93.03 0.00 0.00 0.00 0.00 0.00 00:08:39.717 00:08:39.717 00:08:39.717 Latency(us) 00:08:39.717 [2024-12-13T05:14:31.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.717 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:39.717 Nvme0n1 : 10.00 23820.04 93.05 0.00 0.00 5370.69 3151.97 15229.32 00:08:39.717 [2024-12-13T05:14:31.371Z] =================================================================================================================== 00:08:39.717 [2024-12-13T05:14:31.372Z] Total : 23820.04 93.05 0.00 0.00 5370.69 3151.97 15229.32 00:08:39.718 { 00:08:39.718 "results": [ 00:08:39.718 { 00:08:39.718 "job": "Nvme0n1", 00:08:39.718 "core_mask": "0x2", 00:08:39.718 "workload": "randwrite", 00:08:39.718 "status": "finished", 00:08:39.718 "queue_depth": 128, 00:08:39.718 "io_size": 4096, 00:08:39.718 "runtime": 10.003258, 00:08:39.718 "iops": 23820.03943115333, 00:08:39.718 "mibps": 93.0470290279427, 00:08:39.718 "io_failed": 0, 00:08:39.718 "io_timeout": 0, 00:08:39.718 "avg_latency_us": 5370.692565506717, 00:08:39.718 "min_latency_us": 3151.9695238095237, 00:08:39.718 "max_latency_us": 15229.318095238095 00:08:39.718 } 00:08:39.718 ], 00:08:39.718 "core_count": 1 00:08:39.718 } 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 823173 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 823173 ']' 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 823173 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 823173 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:39.718 06:14:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 823173' 00:08:39.718 killing process with pid 823173 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 823173 00:08:39.718 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.718 00:08:39.718 Latency(us) 00:08:39.718 [2024-12-13T05:14:31.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.718 [2024-12-13T05:14:31.372Z] =================================================================================================================== 00:08:39.718 [2024-12-13T05:14:31.372Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 823173 00:08:39.718 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:39.976 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:40.234 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:40.234 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:40.492 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:40.493 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:40.493 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 820145 00:08:40.493 06:14:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 820145 00:08:40.493 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 820145 Killed "${NVMF_APP[@]}" "$@" 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=825211 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 825211 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 825211 ']' 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.493 06:14:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.493 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.493 [2024-12-13 06:14:32.088239] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:40.493 [2024-12-13 06:14:32.088283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.751 [2024-12-13 06:14:32.167840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.751 [2024-12-13 06:14:32.188618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:40.751 [2024-12-13 06:14:32.188653] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:40.751 [2024-12-13 06:14:32.188660] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:40.751 [2024-12-13 06:14:32.188667] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:40.751 [2024-12-13 06:14:32.188672] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:40.751 [2024-12-13 06:14:32.189140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:40.751 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:41.010 [2024-12-13 06:14:32.480983] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:41.010 [2024-12-13 06:14:32.481060] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:41.010 [2024-12-13 06:14:32.481083] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e20069b8-9399-4288-a168-20bf756cf183 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e20069b8-9399-4288-a168-20bf756cf183 
00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:41.010 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:41.268 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e20069b8-9399-4288-a168-20bf756cf183 -t 2000 00:08:41.268 [ 00:08:41.268 { 00:08:41.268 "name": "e20069b8-9399-4288-a168-20bf756cf183", 00:08:41.268 "aliases": [ 00:08:41.268 "lvs/lvol" 00:08:41.268 ], 00:08:41.268 "product_name": "Logical Volume", 00:08:41.268 "block_size": 4096, 00:08:41.268 "num_blocks": 38912, 00:08:41.268 "uuid": "e20069b8-9399-4288-a168-20bf756cf183", 00:08:41.268 "assigned_rate_limits": { 00:08:41.268 "rw_ios_per_sec": 0, 00:08:41.268 "rw_mbytes_per_sec": 0, 00:08:41.268 "r_mbytes_per_sec": 0, 00:08:41.268 "w_mbytes_per_sec": 0 00:08:41.268 }, 00:08:41.268 "claimed": false, 00:08:41.268 "zoned": false, 00:08:41.268 "supported_io_types": { 00:08:41.268 "read": true, 00:08:41.268 "write": true, 00:08:41.268 "unmap": true, 00:08:41.268 "flush": false, 00:08:41.268 "reset": true, 00:08:41.268 "nvme_admin": false, 00:08:41.268 "nvme_io": false, 00:08:41.268 "nvme_io_md": false, 00:08:41.268 "write_zeroes": true, 00:08:41.268 "zcopy": false, 00:08:41.268 "get_zone_info": false, 00:08:41.268 "zone_management": false, 00:08:41.268 "zone_append": 
false, 00:08:41.268 "compare": false, 00:08:41.268 "compare_and_write": false, 00:08:41.268 "abort": false, 00:08:41.268 "seek_hole": true, 00:08:41.268 "seek_data": true, 00:08:41.268 "copy": false, 00:08:41.268 "nvme_iov_md": false 00:08:41.268 }, 00:08:41.268 "driver_specific": { 00:08:41.268 "lvol": { 00:08:41.268 "lvol_store_uuid": "417a9e93-57ec-405a-ac1f-e4384758ff2b", 00:08:41.268 "base_bdev": "aio_bdev", 00:08:41.268 "thin_provision": false, 00:08:41.268 "num_allocated_clusters": 38, 00:08:41.268 "snapshot": false, 00:08:41.268 "clone": false, 00:08:41.268 "esnap_clone": false 00:08:41.268 } 00:08:41.268 } 00:08:41.268 } 00:08:41.268 ] 00:08:41.268 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:41.268 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:41.268 06:14:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:41.527 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:41.527 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:41.527 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:41.785 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:41.785 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:42.043 [2024-12-13 06:14:33.441915] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:42.043 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:42.043 06:14:33 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:42.044 request: 00:08:42.044 { 00:08:42.044 "uuid": "417a9e93-57ec-405a-ac1f-e4384758ff2b", 00:08:42.044 "method": "bdev_lvol_get_lvstores", 00:08:42.044 "req_id": 1 00:08:42.044 } 00:08:42.044 Got JSON-RPC error response 00:08:42.044 response: 00:08:42.044 { 00:08:42.044 "code": -19, 00:08:42.044 "message": "No such device" 00:08:42.044 } 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:42.044 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:42.302 aio_bdev 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e20069b8-9399-4288-a168-20bf756cf183 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=e20069b8-9399-4288-a168-20bf756cf183 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.302 06:14:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:42.560 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e20069b8-9399-4288-a168-20bf756cf183 -t 2000 00:08:42.560 [ 00:08:42.560 { 00:08:42.560 "name": "e20069b8-9399-4288-a168-20bf756cf183", 00:08:42.560 "aliases": [ 00:08:42.560 "lvs/lvol" 00:08:42.560 ], 00:08:42.560 "product_name": "Logical Volume", 00:08:42.560 "block_size": 4096, 00:08:42.560 "num_blocks": 38912, 00:08:42.560 "uuid": "e20069b8-9399-4288-a168-20bf756cf183", 00:08:42.560 "assigned_rate_limits": { 00:08:42.560 "rw_ios_per_sec": 0, 00:08:42.560 "rw_mbytes_per_sec": 0, 00:08:42.560 "r_mbytes_per_sec": 0, 00:08:42.560 "w_mbytes_per_sec": 0 00:08:42.560 }, 00:08:42.560 "claimed": false, 00:08:42.560 "zoned": false, 00:08:42.560 "supported_io_types": { 00:08:42.560 "read": true, 00:08:42.560 "write": true, 00:08:42.560 "unmap": true, 00:08:42.560 "flush": false, 00:08:42.560 "reset": true, 00:08:42.560 "nvme_admin": false, 00:08:42.560 "nvme_io": false, 00:08:42.560 "nvme_io_md": false, 00:08:42.560 "write_zeroes": true, 00:08:42.560 "zcopy": false, 00:08:42.560 "get_zone_info": false, 00:08:42.560 "zone_management": false, 00:08:42.560 "zone_append": false, 00:08:42.560 "compare": false, 00:08:42.560 "compare_and_write": false, 
00:08:42.560 "abort": false, 00:08:42.560 "seek_hole": true, 00:08:42.560 "seek_data": true, 00:08:42.560 "copy": false, 00:08:42.560 "nvme_iov_md": false 00:08:42.560 }, 00:08:42.560 "driver_specific": { 00:08:42.560 "lvol": { 00:08:42.560 "lvol_store_uuid": "417a9e93-57ec-405a-ac1f-e4384758ff2b", 00:08:42.560 "base_bdev": "aio_bdev", 00:08:42.560 "thin_provision": false, 00:08:42.560 "num_allocated_clusters": 38, 00:08:42.560 "snapshot": false, 00:08:42.560 "clone": false, 00:08:42.560 "esnap_clone": false 00:08:42.560 } 00:08:42.560 } 00:08:42.560 } 00:08:42.560 ] 00:08:42.560 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:42.560 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:42.560 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:42.818 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:42.818 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:42.818 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:43.077 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:43.077 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e20069b8-9399-4288-a168-20bf756cf183 00:08:43.335 06:14:34 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 417a9e93-57ec-405a-ac1f-e4384758ff2b 00:08:43.335 06:14:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:43.593 00:08:43.593 real 0m16.881s 00:08:43.593 user 0m43.739s 00:08:43.593 sys 0m3.791s 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:43.593 ************************************ 00:08:43.593 END TEST lvs_grow_dirty 00:08:43.593 ************************************ 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:43.593 nvmf_trace.0 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:43.593 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:43.851 rmmod nvme_tcp 00:08:43.851 rmmod nvme_fabrics 00:08:43.851 rmmod nvme_keyring 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 825211 ']' 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 825211 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 825211 ']' 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 825211 
00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 825211 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 825211' 00:08:43.852 killing process with pid 825211 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 825211 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 825211 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:43.852 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:44.110 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:44.110 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:44.110 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:44.110 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:44.110 06:14:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:46.014 00:08:46.014 real 0m41.739s 00:08:46.014 user 1m4.532s 00:08:46.014 sys 0m10.110s 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:46.014 ************************************ 00:08:46.014 END TEST nvmf_lvs_grow 00:08:46.014 ************************************ 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:46.014 ************************************ 00:08:46.014 START TEST nvmf_bdev_io_wait 00:08:46.014 ************************************ 00:08:46.014 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:46.274 * Looking for test storage... 
00:08:46.274 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:46.274 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.274 --rc genhtml_branch_coverage=1 00:08:46.274 --rc genhtml_function_coverage=1 00:08:46.274 --rc genhtml_legend=1 00:08:46.274 --rc geninfo_all_blocks=1 00:08:46.274 --rc geninfo_unexecuted_blocks=1 00:08:46.274 00:08:46.274 ' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:46.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.274 --rc genhtml_branch_coverage=1 00:08:46.274 --rc genhtml_function_coverage=1 00:08:46.274 --rc genhtml_legend=1 00:08:46.274 --rc geninfo_all_blocks=1 00:08:46.274 --rc geninfo_unexecuted_blocks=1 00:08:46.274 00:08:46.274 ' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:46.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.274 --rc genhtml_branch_coverage=1 00:08:46.274 --rc genhtml_function_coverage=1 00:08:46.274 --rc genhtml_legend=1 00:08:46.274 --rc geninfo_all_blocks=1 00:08:46.274 --rc geninfo_unexecuted_blocks=1 00:08:46.274 00:08:46.274 ' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:46.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.274 --rc genhtml_branch_coverage=1 00:08:46.274 --rc genhtml_function_coverage=1 00:08:46.274 --rc genhtml_legend=1 00:08:46.274 --rc geninfo_all_blocks=1 00:08:46.274 --rc geninfo_unexecuted_blocks=1 00:08:46.274 00:08:46.274 ' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.274 06:14:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:46.274 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:46.275 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:08:46.275 06:14:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:52.843 06:14:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:52.843 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:52.843 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:52.843 06:14:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:52.843 Found net devices under 0000:af:00.0: cvl_0_0 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:52.843 
06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:52.843 Found net devices under 0000:af:00.1: cvl_0_1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:52.843 06:14:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:52.843 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:52.843 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:52.843 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:08:52.843 00:08:52.843 --- 10.0.0.2 ping statistics --- 00:08:52.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.844 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:52.844 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:52.844 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:08:52.844 00:08:52.844 --- 10.0.0.1 ping statistics --- 00:08:52.844 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:52.844 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=829195 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@510 -- # waitforlisten 829195 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 829195 ']' 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:52.844 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 [2024-12-13 06:14:43.843664] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:52.844 [2024-12-13 06:14:43.843708] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:52.844 [2024-12-13 06:14:43.920251] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:52.844 [2024-12-13 06:14:43.944538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:52.844 [2024-12-13 06:14:43.944574] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:52.844 [2024-12-13 06:14:43.944581] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:52.844 [2024-12-13 06:14:43.944588] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:52.844 [2024-12-13 06:14:43.944594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:52.844 [2024-12-13 06:14:43.945913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:52.844 [2024-12-13 06:14:43.946023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:52.844 [2024-12-13 06:14:43.946103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.844 [2024-12-13 06:14:43.946105] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:52.844 06:14:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 [2024-12-13 06:14:44.101914] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 Malloc0 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 
06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:52.844 [2024-12-13 06:14:44.149124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=829378 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=829381 
00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.844 { 00:08:52.844 "params": { 00:08:52.844 "name": "Nvme$subsystem", 00:08:52.844 "trtype": "$TEST_TRANSPORT", 00:08:52.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.844 "adrfam": "ipv4", 00:08:52.844 "trsvcid": "$NVMF_PORT", 00:08:52.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.844 "hdgst": ${hdgst:-false}, 00:08:52.844 "ddgst": ${ddgst:-false} 00:08:52.844 }, 00:08:52.844 "method": "bdev_nvme_attach_controller" 00:08:52.844 } 00:08:52.844 EOF 00:08:52.844 )") 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=829384 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:52.844 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.844 { 00:08:52.844 "params": { 00:08:52.844 "name": "Nvme$subsystem", 00:08:52.844 "trtype": "$TEST_TRANSPORT", 00:08:52.844 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.844 "adrfam": "ipv4", 00:08:52.844 "trsvcid": "$NVMF_PORT", 00:08:52.844 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.844 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.844 "hdgst": ${hdgst:-false}, 00:08:52.844 "ddgst": ${ddgst:-false} 00:08:52.844 }, 00:08:52.844 "method": "bdev_nvme_attach_controller" 00:08:52.844 } 00:08:52.844 EOF 00:08:52.845 )") 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=829388 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.845 { 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme$subsystem", 00:08:52.845 "trtype": "$TEST_TRANSPORT", 00:08:52.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "$NVMF_PORT", 00:08:52.845 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.845 "hdgst": ${hdgst:-false}, 00:08:52.845 "ddgst": ${ddgst:-false} 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 } 00:08:52.845 EOF 00:08:52.845 )") 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:08:52.845 { 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme$subsystem", 00:08:52.845 "trtype": "$TEST_TRANSPORT", 00:08:52.845 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "$NVMF_PORT", 00:08:52.845 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:52.845 "hdgst": ${hdgst:-false}, 00:08:52.845 "ddgst": ${ddgst:-false} 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 } 00:08:52.845 EOF 00:08:52.845 )") 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 829378 00:08:52.845 06:14:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme1", 00:08:52.845 "trtype": "tcp", 00:08:52.845 "traddr": "10.0.0.2", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "4420", 00:08:52.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.845 "hdgst": false, 00:08:52.845 "ddgst": false 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 }' 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme1", 00:08:52.845 "trtype": "tcp", 00:08:52.845 "traddr": "10.0.0.2", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "4420", 00:08:52.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.845 "hdgst": false, 00:08:52.845 "ddgst": false 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 }' 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme1", 00:08:52.845 "trtype": "tcp", 00:08:52.845 "traddr": "10.0.0.2", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "4420", 00:08:52.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.845 "hdgst": false, 00:08:52.845 "ddgst": false 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 }' 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:08:52.845 06:14:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:08:52.845 "params": { 00:08:52.845 "name": "Nvme1", 00:08:52.845 "trtype": "tcp", 00:08:52.845 "traddr": "10.0.0.2", 00:08:52.845 "adrfam": "ipv4", 00:08:52.845 "trsvcid": "4420", 00:08:52.845 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:52.845 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:52.845 "hdgst": false, 00:08:52.845 "ddgst": false 00:08:52.845 }, 00:08:52.845 "method": "bdev_nvme_attach_controller" 00:08:52.845 }' 00:08:52.845 [2024-12-13 06:14:44.200808] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 23.11.0 initialization... 00:08:52.845 [2024-12-13 06:14:44.200861] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:52.845 [2024-12-13 06:14:44.201522] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:52.845 [2024-12-13 06:14:44.201568] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:52.845 [2024-12-13 06:14:44.204639] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:08:52.845 [2024-12-13 06:14:44.204684] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:52.845 [2024-12-13 06:14:44.206526] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:08:52.845 [2024-12-13 06:14:44.206568] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:52.845 [2024-12-13 06:14:44.392576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.845 [2024-12-13 06:14:44.409769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:08:52.845 [2024-12-13 06:14:44.485254] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.104 [2024-12-13 06:14:44.508620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:08:53.104 [2024-12-13 06:14:44.539960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.104 [2024-12-13 06:14:44.556852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:08:53.104 [2024-12-13 06:14:44.587039] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.104 [2024-12-13 06:14:44.603043] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:08:53.104 Running I/O for 1 seconds... 00:08:53.362 Running I/O for 1 seconds... 00:08:53.362 Running I/O for 1 seconds... 00:08:53.362 Running I/O for 1 seconds... 
00:08:54.303 243056.00 IOPS, 949.44 MiB/s
00:08:54.303 Latency(us)
00:08:54.303 [2024-12-13T05:14:45.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:54.303 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:08:54.303 Nvme1n1 : 1.00 242690.01 948.01 0.00 0.00 524.21 226.26 1490.16
00:08:54.303 [2024-12-13T05:14:45.957Z] ===================================================================================================================
00:08:54.303 [2024-12-13T05:14:45.957Z] Total : 242690.01 948.01 0.00 0.00 524.21 226.26 1490.16
00:08:54.303 6142.00 IOPS, 23.99 MiB/s
00:08:54.303 Latency(us)
00:08:54.303 [2024-12-13T05:14:45.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:54.303 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:08:54.303 Nvme1n1 : 1.02 6158.03 24.05 0.00 0.00 20563.40 7302.58 28711.01
00:08:54.303 [2024-12-13T05:14:45.957Z] ===================================================================================================================
00:08:54.303 [2024-12-13T05:14:45.957Z] Total : 6158.03 24.05 0.00 0.00 20563.40 7302.58 28711.01
00:08:54.303 13916.00 IOPS, 54.36 MiB/s
00:08:54.303 Latency(us)
00:08:54.303 [2024-12-13T05:14:45.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:54.303 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:08:54.303 Nvme1n1 : 1.01 13975.23 54.59 0.00 0.00 9132.21 4244.24 18225.25
00:08:54.303 [2024-12-13T05:14:45.957Z] ===================================================================================================================
00:08:54.303 [2024-12-13T05:14:45.957Z] Total : 13975.23 54.59 0.00 0.00 9132.21 4244.24 18225.25
00:08:54.303 6014.00 IOPS, 23.49 MiB/s
00:08:54.303 Latency(us)
00:08:54.303 [2024-12-13T05:14:45.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:08:54.303 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:08:54.303 Nvme1n1 : 1.01 6109.18 23.86 0.00 0.00 20887.98 4462.69 42192.70
00:08:54.303 [2024-12-13T05:14:45.957Z] ===================================================================================================================
00:08:54.303 [2024-12-13T05:14:45.957Z] Total : 6109.18 23.86 0.00 0.00 20887.98 4462.69 42192.70
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 829381
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 829384
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 829388
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup
00:08:54.562 06:14:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20}
00:08:54.562
06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:08:54.562 rmmod nvme_tcp
00:08:54.562 rmmod nvme_fabrics
00:08:54.562 rmmod nvme_keyring
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 829195 ']'
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 829195
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 829195 ']'
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 829195
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 829195
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 829195'
00:08:54.562 killing process with pid 829195
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 829195
00:08:54.562 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 829195
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:54.821 06:14:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:08:56.726
00:08:56.726 real 0m10.686s
00:08:56.726 user 0m16.063s
00:08:56.726 sys 0m6.015s
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:08:56.726 ************************************
00:08:56.726 END TEST nvmf_bdev_io_wait
************************************
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:56.726 06:14:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:56.985 ************************************
00:08:56.985 START TEST nvmf_queue_depth ************************************
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:08:56.986 * Looking for test storage...
00:08:56.986 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-:
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-:
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 ))
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:08:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.986 --rc genhtml_branch_coverage=1
00:08:56.986 --rc genhtml_function_coverage=1
00:08:56.986 --rc genhtml_legend=1
00:08:56.986 --rc geninfo_all_blocks=1
00:08:56.986 --rc geninfo_unexecuted_blocks=1
00:08:56.986
00:08:56.986 '
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:08:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.986 --rc genhtml_branch_coverage=1
00:08:56.986 --rc genhtml_function_coverage=1
00:08:56.986 --rc genhtml_legend=1
00:08:56.986 --rc geninfo_all_blocks=1
00:08:56.986 --rc geninfo_unexecuted_blocks=1
00:08:56.986
00:08:56.986 '
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:08:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.986 --rc genhtml_branch_coverage=1
00:08:56.986 --rc genhtml_function_coverage=1
00:08:56.986 --rc genhtml_legend=1
00:08:56.986 --rc geninfo_all_blocks=1
00:08:56.986 --rc geninfo_unexecuted_blocks=1
00:08:56.986
00:08:56.986 '
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:08:56.986 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:08:56.986 --rc genhtml_branch_coverage=1
00:08:56.986 --rc genhtml_function_coverage=1
00:08:56.986 --rc genhtml_legend=1
00:08:56.986 --rc geninfo_all_blocks=1
00:08:56.986 --rc geninfo_unexecuted_blocks=1
00:08:56.986
00:08:56.986 '
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:08:56.986 06:14:48
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:08:56.986 06:14:48
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:08:56.986 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:08:56.987 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:08:56.987 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:08:56.987 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable
00:08:56.987 06:14:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x
00:09:03.554 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722
00:09:03.555 06:14:54
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=()
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:09:03.555 Found 0000:af:00.0 (0x8086 - 0x159b)
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:09:03.555 Found 0000:af:00.1 (0x8086 - 0x159b)
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:09:03.555 Found net devices under 0000:af:00.0: cvl_0_0
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- #
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:09:03.555 Found net devices under 0000:af:00.1: cvl_0_1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:09:03.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:09:03.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.407 ms
00:09:03.555
00:09:03.555 --- 10.0.0.2 ping statistics ---
00:09:03.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:09:03.555 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms
00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:09:03.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:09:03.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:09:03.555 00:09:03.555 --- 10.0.0.1 ping statistics --- 00:09:03.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:03.555 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:03.555 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=833159 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 833159 
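The `nvmf_tcp_init` steps traced above (flush both interfaces, move the target side into a namespace, assign the 10.0.0.x addresses, open port 4420, then ping both ways) can be summarized as a dry run. This is a sketch reconstructed from the log, not the harness itself: the `run` helper is our own and only echoes, so the snippet is inspectable without root; replace it with `sudo` to execute for real.

```shell
# Dry-run sketch of the namespace plumbing nvmf_tcp_init performs.
# Interface names and addresses are taken from the log output.
run() { echo "+ $*"; }   # swap the body for: sudo "$@"

target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
run ip -4 addr flush "$target_if"
run ip -4 addr flush "$initiator_if"
run ip netns add "$ns"
run ip link set "$target_if" netns "$ns"          # target NIC lives in the netns
run ip addr add 10.0.0.1/24 dev "$initiator_if"   # initiator stays in the root ns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
run ip link set "$initiator_if" up
run ip netns exec "$ns" ip link set "$target_if" up
run ip netns exec "$ns" ip link set lo up
run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
```

Isolating the target interface in its own namespace is what lets one host act as both NVMe/TCP target (10.0.0.2, inside the netns) and initiator (10.0.0.1, in the root namespace), which the two ping checks in the log then verify.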
00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833159 ']' 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:03.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 [2024-12-13 06:14:54.683697] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:03.556 [2024-12-13 06:14:54.683741] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:03.556 [2024-12-13 06:14:54.762340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.556 [2024-12-13 06:14:54.783506] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:03.556 [2024-12-13 06:14:54.783543] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:03.556 [2024-12-13 06:14:54.783550] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:03.556 [2024-12-13 06:14:54.783555] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:03.556 [2024-12-13 06:14:54.783560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:03.556 [2024-12-13 06:14:54.784016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 [2024-12-13 06:14:54.914593] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 Malloc0 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 [2024-12-13 06:14:54.964590] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:03.556 06:14:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=833185 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 833185 /var/tmp/bdevperf.sock 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 833185 ']' 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:03.556 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:03.556 06:14:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.556 [2024-12-13 06:14:55.014959] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:03.556 [2024-12-13 06:14:55.015000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid833185 ] 00:09:03.556 [2024-12-13 06:14:55.091799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:03.556 [2024-12-13 06:14:55.114464] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.556 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:03.556 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:03.556 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:03.556 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.556 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:03.815 NVMe0n1 00:09:03.815 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.815 06:14:55 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:04.080 Running I/O for 10 seconds... 
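The control-plane calls the queue-depth test has issued so far (target-side RPCs, then bdevperf attaching its own controller over a second socket) condense to the sequence below. This is a dry-run sketch assembled from the log: the `run` helper only echoes, and the `scripts/rpc.py` path is an assumed relative path inside an SPDK checkout, not something the log states.

```shell
# Condensed dry run of the queue_depth.sh control plane: subsystem setup
# against the target's default /var/tmp/spdk.sock, then the bdevperf
# attach against /var/tmp/bdevperf.sock.
run() { echo "+ $*"; }    # swap the body for: "$@" to execute for real
rpc=scripts/rpc.py        # assumed path inside the SPDK tree
nqn=nqn.2016-06.io.spdk:cnode1

run "$rpc" nvmf_create_transport -t tcp -o -u 8192
run "$rpc" bdev_malloc_create 64 512 -b Malloc0
run "$rpc" nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001
run "$rpc" nvmf_subsystem_add_ns "$nqn" Malloc0
run "$rpc" nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
run "$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$nqn"
```

bdevperf itself was started separately with `-q 1024 -o 4096 -w verify -t 10`, which is where the queue depth of 1024 and the 4 KiB I/O size in the results table come from; at 4096-byte I/Os, MiB/s in that table is simply IOPS/256.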
00:09:05.954 12141.00 IOPS, 47.43 MiB/s [2024-12-13T05:14:58.545Z] 12279.00 IOPS, 47.96 MiB/s [2024-12-13T05:14:59.922Z] 12292.67 IOPS, 48.02 MiB/s [2024-12-13T05:15:00.859Z] 12438.00 IOPS, 48.59 MiB/s [2024-12-13T05:15:01.794Z] 12464.00 IOPS, 48.69 MiB/s [2024-12-13T05:15:02.730Z] 12471.50 IOPS, 48.72 MiB/s [2024-12-13T05:15:03.665Z] 12532.57 IOPS, 48.96 MiB/s [2024-12-13T05:15:04.601Z] 12523.75 IOPS, 48.92 MiB/s [2024-12-13T05:15:05.978Z] 12515.22 IOPS, 48.89 MiB/s [2024-12-13T05:15:05.978Z] 12515.60 IOPS, 48.89 MiB/s 00:09:14.324 Latency(us) 00:09:14.324 [2024-12-13T05:15:05.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.324 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:14.324 Verification LBA range: start 0x0 length 0x4000 00:09:14.324 NVMe0n1 : 10.05 12546.22 49.01 0.00 0.00 81326.99 11421.99 52179.14 00:09:14.324 [2024-12-13T05:15:05.978Z] =================================================================================================================== 00:09:14.324 [2024-12-13T05:15:05.978Z] Total : 12546.22 49.01 0.00 0.00 81326.99 11421.99 52179.14 00:09:14.324 { 00:09:14.324 "results": [ 00:09:14.324 { 00:09:14.324 "job": "NVMe0n1", 00:09:14.324 "core_mask": "0x1", 00:09:14.324 "workload": "verify", 00:09:14.324 "status": "finished", 00:09:14.324 "verify_range": { 00:09:14.324 "start": 0, 00:09:14.324 "length": 16384 00:09:14.324 }, 00:09:14.324 "queue_depth": 1024, 00:09:14.324 "io_size": 4096, 00:09:14.324 "runtime": 10.050356, 00:09:14.324 "iops": 12546.22224327178, 00:09:14.324 "mibps": 49.008680637780394, 00:09:14.324 "io_failed": 0, 00:09:14.324 "io_timeout": 0, 00:09:14.324 "avg_latency_us": 81326.98742449889, 00:09:14.324 "min_latency_us": 11421.988571428572, 00:09:14.324 "max_latency_us": 52179.13904761905 00:09:14.324 } 00:09:14.324 ], 00:09:14.324 "core_count": 1 00:09:14.324 } 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 833185 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833185 ']' 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833185 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.324 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833185 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833185' 00:09:14.325 killing process with pid 833185 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833185 00:09:14.325 Received shutdown signal, test time was about 10.000000 seconds 00:09:14.325 00:09:14.325 Latency(us) 00:09:14.325 [2024-12-13T05:15:05.979Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:14.325 [2024-12-13T05:15:05.979Z] =================================================================================================================== 00:09:14.325 [2024-12-13T05:15:05.979Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833185 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:14.325 rmmod nvme_tcp 00:09:14.325 rmmod nvme_fabrics 00:09:14.325 rmmod nvme_keyring 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 833159 ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 833159 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 833159 ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 833159 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 833159 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 833159' 00:09:14.325 killing process with pid 833159 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 833159 00:09:14.325 06:15:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 833159 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.584 06:15:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.659 06:15:08 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:16.659 00:09:16.659 real 0m19.778s 00:09:16.659 user 0m23.094s 00:09:16.659 sys 0m6.075s 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:16.659 ************************************ 00:09:16.659 END TEST nvmf_queue_depth 00:09:16.659 ************************************ 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.659 ************************************ 00:09:16.659 START TEST nvmf_target_multipath 00:09:16.659 ************************************ 00:09:16.659 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:16.919 * Looking for test storage... 
00:09:16.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:16.919 06:15:08 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:16.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.919 --rc genhtml_branch_coverage=1 00:09:16.919 --rc genhtml_function_coverage=1 00:09:16.919 --rc genhtml_legend=1 00:09:16.919 --rc geninfo_all_blocks=1 00:09:16.919 --rc geninfo_unexecuted_blocks=1 00:09:16.919 00:09:16.919 ' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:16.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.919 --rc genhtml_branch_coverage=1 00:09:16.919 --rc genhtml_function_coverage=1 00:09:16.919 --rc genhtml_legend=1 00:09:16.919 --rc geninfo_all_blocks=1 00:09:16.919 --rc geninfo_unexecuted_blocks=1 00:09:16.919 00:09:16.919 ' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:16.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.919 --rc genhtml_branch_coverage=1 00:09:16.919 --rc genhtml_function_coverage=1 00:09:16.919 --rc genhtml_legend=1 00:09:16.919 --rc geninfo_all_blocks=1 00:09:16.919 --rc geninfo_unexecuted_blocks=1 00:09:16.919 00:09:16.919 ' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:16.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.919 --rc genhtml_branch_coverage=1 00:09:16.919 --rc genhtml_function_coverage=1 00:09:16.919 --rc genhtml_legend=1 00:09:16.919 --rc geninfo_all_blocks=1 00:09:16.919 --rc geninfo_unexecuted_blocks=1 00:09:16.919 00:09:16.919 ' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.919 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:16.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:16.920 06:15:08 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:23.491 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:23.491 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.491 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:23.492 Found net devices under 0000:af:00.0: cvl_0_0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:23.492 06:15:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:23.492 Found net devices under 0000:af:00.1: cvl_0_1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:23.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:09:23.492 00:09:23.492 --- 10.0.0.2 ping statistics --- 00:09:23.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.492 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:23.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.140 ms 00:09:23.492 00:09:23.492 --- 10.0.0.1 ping statistics --- 00:09:23.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.492 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:23.492 only one NIC for nvmf test 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:23.492 06:15:14 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:23.492 rmmod nvme_tcp 00:09:23.492 rmmod nvme_fabrics 00:09:23.492 rmmod nvme_keyring 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:23.492 06:15:14 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:25.397 00:09:25.397 real 0m8.373s 00:09:25.397 user 0m1.858s 00:09:25.397 sys 0m4.445s 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:25.397 ************************************ 00:09:25.397 END TEST nvmf_target_multipath 00:09:25.397 ************************************ 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.397 06:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.398 ************************************ 00:09:25.398 START TEST nvmf_zcopy 00:09:25.398 ************************************ 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:25.398 * Looking for test storage... 00:09:25.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.398 06:15:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:25.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.398 --rc genhtml_branch_coverage=1 00:09:25.398 --rc genhtml_function_coverage=1 00:09:25.398 --rc genhtml_legend=1 00:09:25.398 --rc geninfo_all_blocks=1 00:09:25.398 --rc geninfo_unexecuted_blocks=1 00:09:25.398 00:09:25.398 ' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:25.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.398 --rc genhtml_branch_coverage=1 00:09:25.398 --rc genhtml_function_coverage=1 00:09:25.398 --rc genhtml_legend=1 00:09:25.398 --rc geninfo_all_blocks=1 00:09:25.398 --rc geninfo_unexecuted_blocks=1 00:09:25.398 00:09:25.398 ' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:25.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.398 --rc genhtml_branch_coverage=1 00:09:25.398 --rc genhtml_function_coverage=1 00:09:25.398 --rc genhtml_legend=1 00:09:25.398 --rc geninfo_all_blocks=1 00:09:25.398 --rc geninfo_unexecuted_blocks=1 00:09:25.398 00:09:25.398 ' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:25.398 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.398 --rc genhtml_branch_coverage=1 00:09:25.398 --rc 
genhtml_function_coverage=1 00:09:25.398 --rc genhtml_legend=1 00:09:25.398 --rc geninfo_all_blocks=1 00:09:25.398 --rc geninfo_unexecuted_blocks=1 00:09:25.398 00:09:25.398 ' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.398 06:15:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:25.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:25.398 06:15:16 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:25.398 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:25.399 06:15:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:31.971 06:15:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:31.971 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:31.971 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:31.971 Found net devices under 0000:af:00.0: cvl_0_0 00:09:31.971 06:15:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:31.971 Found net devices under 0000:af:00.1: cvl_0_1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.971 06:15:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:31.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:09:31.971 00:09:31.971 --- 10.0.0.2 ping statistics --- 00:09:31.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.971 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:09:31.971 00:09:31.971 --- 10.0.0.1 ping statistics --- 00:09:31.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.971 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.971 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=842641 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x2 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 842641 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 842641 ']' 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:31.972 06:15:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 [2024-12-13 06:15:22.928302] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:31.972 [2024-12-13 06:15:22.928348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:31.972 [2024-12-13 06:15:23.004800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.972 [2024-12-13 06:15:23.026130] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:31.972 [2024-12-13 06:15:23.026167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:31.972 [2024-12-13 06:15:23.026174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:31.972 [2024-12-13 06:15:23.026179] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:31.972 [2024-12-13 06:15:23.026188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:31.972 [2024-12-13 06:15:23.026692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 [2024-12-13 06:15:23.168692] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 [2024-12-13 06:15:23.188877] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 malloc0 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:31.972 { 00:09:31.972 "params": { 00:09:31.972 "name": "Nvme$subsystem", 00:09:31.972 "trtype": "$TEST_TRANSPORT", 00:09:31.972 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:31.972 "adrfam": "ipv4", 00:09:31.972 "trsvcid": "$NVMF_PORT", 00:09:31.972 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:31.972 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:31.972 "hdgst": ${hdgst:-false}, 00:09:31.972 "ddgst": ${ddgst:-false} 00:09:31.972 }, 00:09:31.972 "method": "bdev_nvme_attach_controller" 00:09:31.972 } 00:09:31.972 EOF 00:09:31.972 )") 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:31.972 06:15:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:31.972 "params": { 00:09:31.972 "name": "Nvme1", 00:09:31.972 "trtype": "tcp", 00:09:31.972 "traddr": "10.0.0.2", 00:09:31.972 "adrfam": "ipv4", 00:09:31.972 "trsvcid": "4420", 00:09:31.972 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:31.972 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:31.972 "hdgst": false, 00:09:31.972 "ddgst": false 00:09:31.972 }, 00:09:31.972 "method": "bdev_nvme_attach_controller" 00:09:31.972 }' 00:09:31.972 [2024-12-13 06:15:23.272002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:09:31.972 [2024-12-13 06:15:23.272043] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid842664 ] 00:09:31.972 [2024-12-13 06:15:23.348087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:31.972 [2024-12-13 06:15:23.370356] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:32.231 Running I/O for 10 seconds... 
00:09:34.100 8770.00 IOPS, 68.52 MiB/s [2024-12-13T05:15:27.130Z] 8853.00 IOPS, 69.16 MiB/s [2024-12-13T05:15:28.066Z] 8862.00 IOPS, 69.23 MiB/s [2024-12-13T05:15:29.003Z] 8879.25 IOPS, 69.37 MiB/s [2024-12-13T05:15:29.939Z] 8888.60 IOPS, 69.44 MiB/s [2024-12-13T05:15:30.874Z] 8895.00 IOPS, 69.49 MiB/s [2024-12-13T05:15:31.810Z] 8873.57 IOPS, 69.32 MiB/s [2024-12-13T05:15:32.746Z] 8880.88 IOPS, 69.38 MiB/s [2024-12-13T05:15:34.125Z] 8886.11 IOPS, 69.42 MiB/s [2024-12-13T05:15:34.125Z] 8890.80 IOPS, 69.46 MiB/s 00:09:42.471 Latency(us) 00:09:42.471 [2024-12-13T05:15:34.125Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:42.471 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:42.471 Verification LBA range: start 0x0 length 0x1000 00:09:42.471 Nvme1n1 : 10.01 8892.99 69.48 0.00 0.00 14352.22 1880.26 22843.98 00:09:42.471 [2024-12-13T05:15:34.125Z] =================================================================================================================== 00:09:42.471 [2024-12-13T05:15:34.125Z] Total : 8892.99 69.48 0.00 0.00 14352.22 1880.26 22843.98 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=844449 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:42.471 06:15:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:42.471 { 00:09:42.471 "params": { 00:09:42.471 "name": "Nvme$subsystem", 00:09:42.471 "trtype": "$TEST_TRANSPORT", 00:09:42.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:42.471 "adrfam": "ipv4", 00:09:42.471 "trsvcid": "$NVMF_PORT", 00:09:42.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:42.471 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:42.471 "hdgst": ${hdgst:-false}, 00:09:42.471 "ddgst": ${ddgst:-false} 00:09:42.471 }, 00:09:42.471 "method": "bdev_nvme_attach_controller" 00:09:42.471 } 00:09:42.471 EOF 00:09:42.471 )") 00:09:42.471 [2024-12-13 06:15:33.882370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.471 [2024-12-13 06:15:33.882403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:42.471 06:15:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:42.471 "params": { 00:09:42.471 "name": "Nvme1", 00:09:42.471 "trtype": "tcp", 00:09:42.471 "traddr": "10.0.0.2", 00:09:42.471 "adrfam": "ipv4", 00:09:42.471 "trsvcid": "4420", 00:09:42.471 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:42.471 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:42.471 "hdgst": false, 00:09:42.471 "ddgst": false 00:09:42.471 }, 00:09:42.471 "method": "bdev_nvme_attach_controller" 00:09:42.471 }' 00:09:42.471 [2024-12-13 06:15:33.894361] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.471 [2024-12-13 06:15:33.894376] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.471 [2024-12-13 06:15:33.906386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.471 [2024-12-13 06:15:33.906397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.471 [2024-12-13 06:15:33.918418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.471 [2024-12-13 06:15:33.918428] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.924479] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:09:42.472 [2024-12-13 06:15:33.924521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid844449 ] 00:09:42.472 [2024-12-13 06:15:33.930456] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.930467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.942486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.942498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.954624] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.954639] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.966646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.966656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.978675] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.978686] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:33.990706] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:33.990717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.000503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:42.472 [2024-12-13 06:15:34.002739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:42.472 [2024-12-13 06:15:34.002750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.014783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.014797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.022561] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:42.472 [2024-12-13 06:15:34.026804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.026817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.038863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.038883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.050869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.050885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.062901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.062913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.074930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.074941] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.086962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.086973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.098991] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.099001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.111045] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.111065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.472 [2024-12-13 06:15:34.123067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.472 [2024-12-13 06:15:34.123082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.135101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.135117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.147134] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.147150] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.159165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.159180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.208717] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.208735] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.219325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.219337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 Running I/O for 5 seconds... 
00:09:42.731 [2024-12-13 06:15:34.236091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.236111] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.251454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.251475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.265378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.265398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.279689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.279709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.290667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.290691] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.304502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.304521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.318395] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.318413] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.332081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.332100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.346083] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.346102] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.360326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.360345] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.731 [2024-12-13 06:15:34.374018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.731 [2024-12-13 06:15:34.374038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.387410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.387430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.401090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.401109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.414763] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.414782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.428065] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.428084] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.441920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.441940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.455175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.455194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.468944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.468963] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.482635] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.482658] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.496377] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.496397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.509687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.509706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.522773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.522792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.536162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.536181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.549793] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.549818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.563078] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 
[2024-12-13 06:15:34.563099] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.576669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.576688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.590491] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.590509] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.603834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.603852] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.617325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.617344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.630904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.630924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:42.991 [2024-12-13 06:15:34.644584] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:42.991 [2024-12-13 06:15:34.644604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.658060] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.658079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.671754] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.671772] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.685140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.685159] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.698698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.698719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.712558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.712578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.726243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.726263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.740254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.740274] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.753957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.753977] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.767607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.767626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.781249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.781269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:43.250 [2024-12-13 06:15:34.795025] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.795045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.808366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.808390] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.822006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.822026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.835961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.835981] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.847171] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.847192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.861220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.861241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.874987] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.875006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.888569] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.888590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.250 [2024-12-13 06:15:34.902262] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.250 [2024-12-13 06:15:34.902282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.509 [2024-12-13 06:15:34.915804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.509 [2024-12-13 06:15:34.915823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.509 [2024-12-13 06:15:34.929913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.509 [2024-12-13 06:15:34.929933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.509 [2024-12-13 06:15:34.940880] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.509 [2024-12-13 06:15:34.940899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:34.955019] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:34.955038] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:34.968805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:34.968826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:34.982182] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:34.982202] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:34.995692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:34.995713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.009431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.009459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.022937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.022958] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.036494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.036514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.050153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.050173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.063513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.063537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.077239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.077259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.090498] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.090516] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.103965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.103984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.117518] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 
[2024-12-13 06:15:35.117538] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.131110] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.131130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.144853] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.144873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.510 [2024-12-13 06:15:35.158381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.510 [2024-12-13 06:15:35.158415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.171841] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.171860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.185273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.185292] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.198823] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.198842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.212357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.212375] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.225703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.225722] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 17110.00 IOPS, 133.67 MiB/s [2024-12-13T05:15:35.422Z] [2024-12-13 06:15:35.239421] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.239440] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.252781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.252800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.266216] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.266236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.279781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.279800] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.293509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.293529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.307487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.307506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.320968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.320989] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.334908] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.334927] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.348745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.768 [2024-12-13 06:15:35.348764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.768 [2024-12-13 06:15:35.362335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.769 [2024-12-13 06:15:35.362354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.769 [2024-12-13 06:15:35.375839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.769 [2024-12-13 06:15:35.375858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.769 [2024-12-13 06:15:35.389525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.769 [2024-12-13 06:15:35.389544] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.769 [2024-12-13 06:15:35.403321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.769 [2024-12-13 06:15:35.403340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:43.769 [2024-12-13 06:15:35.416772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:43.769 [2024-12-13 06:15:35.416791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.027 [2024-12-13 06:15:35.430559] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.027 [2024-12-13 06:15:35.430578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:44.027 [2024-12-13 06:15:35.444590] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:44.027 [2024-12-13 06:15:35.444610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:09:44.027 [2024-12-13 06:15:35.458169] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:44.027 [2024-12-13 06:15:35.458188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats every ~13 ms from 06:15:35.471943 through 06:15:37.551375 (elapsed markers 00:09:44.027 through 00:09:46.103); only these progress lines interleave: ...]
00:09:44.805 17200.00 IOPS, 134.38 MiB/s [2024-12-13T05:15:36.459Z]
00:09:45.843 17199.33 IOPS, 134.37 MiB/s [2024-12-13T05:15:37.497Z]
00:09:46.103 [2024-12-13 06:15:37.565163] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.565183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.579365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.579385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.593037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.593056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.607096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.607116] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.620954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.620978] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.634623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.634644] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.648113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.648133] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.661423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.661444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.675002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 
[2024-12-13 06:15:37.675022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.688386] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.688405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.702331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.702350] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.715796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.715816] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.729379] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.729398] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.743118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.743138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.103 [2024-12-13 06:15:37.757145] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.103 [2024-12-13 06:15:37.757165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.770545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.770570] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.784501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.784522] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.798728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.798747] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.812667] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.812687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.826206] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.826225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.839957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.839975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.853826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.853845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.867441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.867468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.881444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.881475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.895080] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.895100] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:46.362 [2024-12-13 06:15:37.909040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.909059] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.922767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.922786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.936535] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.936554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.950965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.950984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.961655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.961675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.975602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.975620] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:37.989791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:37.989809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:38.003690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:38.003718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.362 [2024-12-13 06:15:38.016999] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.362 [2024-12-13 06:15:38.017018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.030848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.030867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.044723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.044741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.058331] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.058349] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.072365] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.072383] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.086290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.086310] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.099822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.099841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.113805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.113823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.127702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.127721] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.141370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.141393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.154830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.154848] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.168548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.168567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.182133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.182153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.195558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.195577] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.209112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.209131] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.222809] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.222826] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 17168.25 IOPS, 134.13 MiB/s [2024-12-13T05:15:38.276Z] [2024-12-13 06:15:38.236824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.236843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.250325] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.250343] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.622 [2024-12-13 06:15:38.264276] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.622 [2024-12-13 06:15:38.264298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.278397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.278416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.289988] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.290007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.303768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.303787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.317434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.317460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.330808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.330827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.344680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 
[2024-12-13 06:15:38.344709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.358228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.358247] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.372170] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.372188] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.385840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.385860] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.399200] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.399218] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.412715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.412733] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.426362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.426381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.440286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.440304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.453586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.453605] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.467245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.467264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.481137] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.481156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.494581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.494600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.508670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.508690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.519364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.519382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:46.881 [2024-12-13 06:15:38.533657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:46.881 [2024-12-13 06:15:38.533676] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.547665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.547683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.561431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.561457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:47.140 [2024-12-13 06:15:38.575345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.575364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.589091] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.589109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.603217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.603236] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.616931] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.616950] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.630708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.630727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.644766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.644786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.658354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.658374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.672300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.672322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.685988] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.686008] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.699579] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.699598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.713228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.713248] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.726739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.726757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.740267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.740286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.140 [2024-12-13 06:15:38.753840] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.140 [2024-12-13 06:15:38.753858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.141 [2024-12-13 06:15:38.767217] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.141 [2024-12-13 06:15:38.767235] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.141 [2024-12-13 06:15:38.780401] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.141 [2024-12-13 06:15:38.780420] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.141 [2024-12-13 06:15:38.794273] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:47.141 [2024-12-13 06:15:38.794294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.808016] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.808035] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.821678] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.821699] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.835548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.835567] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.849157] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.849176] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.862749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.862769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.876778] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.876797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.890084] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.890103] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.903860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 
[2024-12-13 06:15:38.903879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.917617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.917637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.931629] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.931649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.945311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.945331] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.959424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.959444] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.973340] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.973360] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:38.987422] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:38.987441] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:39.001055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:39.001075] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:39.014983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:39.015003] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:39.028564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:39.028584] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.400 [2024-12-13 06:15:39.042453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.400 [2024-12-13 06:15:39.042474] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.056049] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.056070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.069812] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.069832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.083387] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.083406] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.096766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.096786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.110520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.110540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.124606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.124626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:47.659 [2024-12-13 06:15:39.138262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.138281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.151836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.151855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.166047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.166074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.179692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.179711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.193868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.193886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.207512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.207531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.221161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.221180] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.235029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.235048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 17153.20 IOPS, 134.01 MiB/s 
00:09:47.659 Latency(us) 00:09:47.659 [2024-12-13T05:15:39.313Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:47.659 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:47.659 Nvme1n1 : 5.01 17156.21 134.03 0.00 0.00 7454.19 3292.40 17850.76 00:09:47.659 [2024-12-13T05:15:39.313Z] =================================================================================================================== 00:09:47.659 [2024-12-13T05:15:39.313Z] Total : 17156.21 134.03 0.00 0.00 7454.19 3292.40 17850.76 00:09:47.659 [2024-12-13 06:15:39.244930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.244947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.256959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.256974] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.269007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.269023] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.281028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.281045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.293056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.293069] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.659 [2024-12-13 06:15:39.305090] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.659 [2024-12-13 06:15:39.305107] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:09:47.918 [2024-12-13 06:15:39.317121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.918 [2024-12-13 06:15:39.317134] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.918 [2024-12-13 06:15:39.329152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.918 [2024-12-13 06:15:39.329167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.918 [2024-12-13 06:15:39.341183] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.918 [2024-12-13 06:15:39.341197] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.919 [2024-12-13 06:15:39.353212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.919 [2024-12-13 06:15:39.353224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.919 [2024-12-13 06:15:39.365245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.919 [2024-12-13 06:15:39.365263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.919 [2024-12-13 06:15:39.377277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.919 [2024-12-13 06:15:39.377289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.919 [2024-12-13 06:15:39.389306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:47.919 [2024-12-13 06:15:39.389319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:47.919 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (844449) - No such process 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 844449 00:09:47.919 06:15:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.919 delay0 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.919 06:15:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:47.919 [2024-12-13 06:15:39.503112] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:54.484 [2024-12-13 06:15:45.596468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x1e662e0 is same with the state(6) to be set 00:09:54.484 [2024-12-13 06:15:45.596500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e662e0 is same with the state(6) to be set 00:09:54.484 Initializing NVMe Controllers 00:09:54.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:54.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:54.484 Initialization complete. Launching workers. 00:09:54.484 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 67 00:09:54.484 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 354, failed to submit 33 00:09:54.484 success 151, unsuccessful 203, failed 0 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:54.484 rmmod nvme_tcp 00:09:54.484 rmmod nvme_fabrics 00:09:54.484 rmmod nvme_keyring 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # 
return 0 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 842641 ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 842641 ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 842641' 00:09:54.484 killing process with pid 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 842641 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:54.484 06:15:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.389 06:15:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:56.389 00:09:56.389 real 0m31.258s 00:09:56.389 user 0m41.823s 00:09:56.389 sys 0m10.998s 00:09:56.389 06:15:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.389 06:15:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.389 ************************************ 00:09:56.389 END TEST nvmf_zcopy 00:09:56.389 ************************************ 00:09:56.389 06:15:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.389 06:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.389 06:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.389 06:15:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:56.389 ************************************ 00:09:56.389 START TEST nvmf_nmic 00:09:56.389 ************************************ 00:09:56.389 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:09:56.649 * Looking for test storage... 00:09:56.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:09:56.649 06:15:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:56.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.649 --rc 
genhtml_branch_coverage=1 00:09:56.649 --rc genhtml_function_coverage=1 00:09:56.649 --rc genhtml_legend=1 00:09:56.649 --rc geninfo_all_blocks=1 00:09:56.649 --rc geninfo_unexecuted_blocks=1 00:09:56.649 00:09:56.649 ' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:56.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.649 --rc genhtml_branch_coverage=1 00:09:56.649 --rc genhtml_function_coverage=1 00:09:56.649 --rc genhtml_legend=1 00:09:56.649 --rc geninfo_all_blocks=1 00:09:56.649 --rc geninfo_unexecuted_blocks=1 00:09:56.649 00:09:56.649 ' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:56.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.649 --rc genhtml_branch_coverage=1 00:09:56.649 --rc genhtml_function_coverage=1 00:09:56.649 --rc genhtml_legend=1 00:09:56.649 --rc geninfo_all_blocks=1 00:09:56.649 --rc geninfo_unexecuted_blocks=1 00:09:56.649 00:09:56.649 ' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:56.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.649 --rc genhtml_branch_coverage=1 00:09:56.649 --rc genhtml_function_coverage=1 00:09:56.649 --rc genhtml_legend=1 00:09:56.649 --rc geninfo_all_blocks=1 00:09:56.649 --rc geninfo_unexecuted_blocks=1 00:09:56.649 00:09:56.649 ' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.649 06:15:48 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.649 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:56.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:56.650 
06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:09:56.650 06:15:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:03.224 06:15:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:03.224 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:03.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:03.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:03.225 Found net devices under 0000:af:00.0: cvl_0_0 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:03.225 Found net devices under 0000:af:00.1: cvl_0_1 00:10:03.225 
06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:03.225 06:15:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:03.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:03.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.432 ms 00:10:03.225 00:10:03.225 --- 10.0.0.2 ping statistics --- 00:10:03.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.225 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:03.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:03.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:10:03.225 00:10:03.225 --- 10.0.0.1 ping statistics --- 00:10:03.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:03.225 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=849838 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 849838 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 849838 ']' 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:03.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.225 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.225 [2024-12-13 06:15:54.235418] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:03.225 [2024-12-13 06:15:54.235477] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:03.225 [2024-12-13 06:15:54.313139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:03.226 [2024-12-13 06:15:54.337475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:03.226 [2024-12-13 06:15:54.337516] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:03.226 [2024-12-13 06:15:54.337524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:03.226 [2024-12-13 06:15:54.337531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:03.226 [2024-12-13 06:15:54.337539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:03.226 [2024-12-13 06:15:54.338974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.226 [2024-12-13 06:15:54.339083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:03.226 [2024-12-13 06:15:54.339183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.226 [2024-12-13 06:15:54.339183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 [2024-12-13 06:15:54.479582] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:03.226 
06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 Malloc0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 [2024-12-13 06:15:54.543984] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:03.226 test case1: single bdev can't be used in multiple subsystems 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 [2024-12-13 06:15:54.571892] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:03.226 [2024-12-13 
06:15:54.571914] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:03.226 [2024-12-13 06:15:54.571922] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:03.226 request: 00:10:03.226 { 00:10:03.226 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:03.226 "namespace": { 00:10:03.226 "bdev_name": "Malloc0", 00:10:03.226 "no_auto_visible": false, 00:10:03.226 "hide_metadata": false 00:10:03.226 }, 00:10:03.226 "method": "nvmf_subsystem_add_ns", 00:10:03.226 "req_id": 1 00:10:03.226 } 00:10:03.226 Got JSON-RPC error response 00:10:03.226 response: 00:10:03.226 { 00:10:03.226 "code": -32602, 00:10:03.226 "message": "Invalid parameters" 00:10:03.226 } 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:03.226 Adding namespace failed - expected result. 
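The trace above shows nmic.sh test case1: adding Malloc0 to a second subsystem is expected to fail (the bdev is already claimed), and the script records the RPC's exit status and requires it to be nonzero. A minimal sketch of that expected-failure pattern, with `rpc_cmd_stub` standing in for the real `rpc_cmd` helper (the stub and its output are assumptions for illustration):

```shell
# Sketch of the expected-failure check nmic.sh performs around
# nvmf_subsystem_add_ns. rpc_cmd_stub is a stand-in that simulates
# the -32602 JSON-RPC error seen in the log above.
rpc_cmd_stub() {
  echo '{"code": -32602, "message": "Invalid parameters"}' >&2
  return 1
}

nmic_status=0
rpc_cmd_stub nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 \
  2>/dev/null || nmic_status=1

# The test passes only when the RPC failed.
if [ "$nmic_status" -eq 0 ]; then
  echo 'Adding namespace passed - failure was expected.'
  exit 1
fi
echo ' Adding namespace failed - expected result.'
```

Capturing the status via `|| nmic_status=1` keeps the script alive under `set -e` while still distinguishing the expected failure from an unexpected success.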
00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:03.226 test case2: host connect to nvmf target in multiple paths 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:03.226 [2024-12-13 06:15:54.584036] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.226 06:15:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:04.163 06:15:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:05.540 06:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:05.540 06:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:05.540 06:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:05.540 06:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:05.540 06:15:56 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:07.447 06:15:58 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:07.447 [global] 00:10:07.447 thread=1 00:10:07.447 invalidate=1 00:10:07.447 rw=write 00:10:07.447 time_based=1 00:10:07.447 runtime=1 00:10:07.447 ioengine=libaio 00:10:07.447 direct=1 00:10:07.447 bs=4096 00:10:07.447 iodepth=1 00:10:07.447 norandommap=0 00:10:07.447 numjobs=1 00:10:07.447 00:10:07.447 verify_dump=1 00:10:07.447 verify_backlog=512 00:10:07.447 verify_state_save=0 00:10:07.447 do_verify=1 00:10:07.447 verify=crc32c-intel 00:10:07.447 [job0] 00:10:07.447 filename=/dev/nvme0n1 00:10:07.447 Could not set queue depth (nvme0n1) 00:10:07.706 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:07.706 fio-3.35 00:10:07.706 Starting 1 thread 00:10:09.085 00:10:09.085 job0: (groupid=0, jobs=1): err= 0: pid=850778: Fri Dec 13 06:16:00 2024 00:10:09.085 read: IOPS=2724, BW=10.6MiB/s (11.2MB/s)(10.7MiB/1001msec) 00:10:09.085 slat (nsec): min=6213, max=29526, avg=7160.16, stdev=1181.79 00:10:09.085 clat (usec): min=155, max=1565, avg=196.65, stdev=41.42 00:10:09.086 lat (usec): min=162, max=1572, 
avg=203.81, stdev=41.45 00:10:09.086 clat percentiles (usec): 00:10:09.086 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 174], 00:10:09.086 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 186], 00:10:09.086 | 70.00th=[ 198], 80.00th=[ 221], 90.00th=[ 255], 95.00th=[ 265], 00:10:09.086 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 379], 99.95th=[ 457], 00:10:09.086 | 99.99th=[ 1565] 00:10:09.086 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:09.086 slat (nsec): min=8968, max=38234, avg=10051.11, stdev=1104.39 00:10:09.086 clat (usec): min=109, max=378, avg=130.49, stdev=13.69 00:10:09.086 lat (usec): min=118, max=416, avg=140.54, stdev=13.92 00:10:09.086 clat percentiles (usec): 00:10:09.086 | 1.00th=[ 116], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 123], 00:10:09.086 | 30.00th=[ 124], 40.00th=[ 126], 50.00th=[ 127], 60.00th=[ 129], 00:10:09.086 | 70.00th=[ 131], 80.00th=[ 135], 90.00th=[ 149], 95.00th=[ 163], 00:10:09.086 | 99.00th=[ 176], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 243], 00:10:09.086 | 99.99th=[ 379] 00:10:09.086 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:09.086 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:09.086 lat (usec) : 250=94.10%, 500=5.88% 00:10:09.086 lat (msec) : 2=0.02% 00:10:09.086 cpu : usr=2.20%, sys=5.80%, ctx=5799, majf=0, minf=1 00:10:09.086 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:09.086 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.086 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:09.086 issued rwts: total=2727,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:09.086 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:09.086 00:10:09.086 Run status group 0 (all jobs): 00:10:09.086 READ: bw=10.6MiB/s (11.2MB/s), 10.6MiB/s-10.6MiB/s (11.2MB/s-11.2MB/s), io=10.7MiB (11.2MB), run=1001-1001msec 
00:10:09.086 WRITE: bw=12.0MiB/s (12.6MB/s), 12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:09.086 00:10:09.086 Disk stats (read/write): 00:10:09.086 nvme0n1: ios=2610/2578, merge=0/0, ticks=531/329, in_queue=860, util=91.38% 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:09.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in 
{1..20} 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:09.086 rmmod nvme_tcp 00:10:09.086 rmmod nvme_fabrics 00:10:09.086 rmmod nvme_keyring 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 849838 ']' 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 849838 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 849838 ']' 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 849838 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 849838 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 849838' 00:10:09.086 killing process with pid 849838 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 849838 00:10:09.086 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 849838 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.346 06:16:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:11.884 00:10:11.884 real 0m14.886s 00:10:11.884 user 0m33.478s 00:10:11.884 sys 0m5.301s 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:11.884 ************************************ 00:10:11.884 END TEST nvmf_nmic 00:10:11.884 ************************************ 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 
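The nmic run above provisions its target through a short RPC sequence (create transport, malloc bdev, subsystem, namespace, listener — all visible in the trace). A sketch of that sequence as a function; the `scripts/rpc.py` path is an assumption, and `DRY_RUN` merely collects the commands since the real calls need a running `nvmf_tgt`:

```shell
# Sketch of the target-side provisioning driven via rpc_cmd in the
# log above. All RPC names and arguments are taken from the trace;
# the rpc.py location is assumed.
RPC=${RPC:-scripts/rpc.py}
declare -a issued=()
rpc() {
  if [ -n "$DRY_RUN" ]; then issued+=("$*"); else "$RPC" "$@"; fi
}

provision_target() {
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

Adding a second listener on port 4421 (as test case2 does) is what lets the initiator connect to the same subsystem over two paths.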
00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.884 ************************************ 00:10:11.884 START TEST nvmf_fio_target 00:10:11.884 ************************************ 00:10:11.884 06:16:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:11.884 * Looking for test storage... 00:10:11.884 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read 
-ra ver2 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.884 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.885 06:16:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:11.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.885 --rc genhtml_branch_coverage=1 00:10:11.885 --rc genhtml_function_coverage=1 00:10:11.885 --rc genhtml_legend=1 00:10:11.885 --rc geninfo_all_blocks=1 00:10:11.885 --rc geninfo_unexecuted_blocks=1 00:10:11.885 00:10:11.885 ' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:11.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.885 --rc genhtml_branch_coverage=1 00:10:11.885 --rc genhtml_function_coverage=1 00:10:11.885 --rc genhtml_legend=1 00:10:11.885 --rc geninfo_all_blocks=1 00:10:11.885 --rc geninfo_unexecuted_blocks=1 00:10:11.885 00:10:11.885 ' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:11.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.885 --rc genhtml_branch_coverage=1 00:10:11.885 --rc genhtml_function_coverage=1 00:10:11.885 --rc genhtml_legend=1 00:10:11.885 --rc geninfo_all_blocks=1 00:10:11.885 --rc geninfo_unexecuted_blocks=1 00:10:11.885 00:10:11.885 ' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:11.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.885 --rc 
genhtml_branch_coverage=1 00:10:11.885 --rc genhtml_function_coverage=1 00:10:11.885 --rc genhtml_legend=1 00:10:11.885 --rc geninfo_all_blocks=1 00:10:11.885 --rc geninfo_unexecuted_blocks=1 00:10:11.885 00:10:11.885 ' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.885 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.885 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:11.886 06:16:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:18.461 06:16:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:18.461 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:18.462 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:18.462 06:16:08 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:18.462 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:18.462 Found net devices under 0000:af:00.0: cvl_0_0 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:18.462 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:18.462 06:16:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:18.462 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:18.462 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:10:18.462 00:10:18.462 --- 10.0.0.2 ping statistics --- 00:10:18.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.462 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:18.462 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:18.462 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.215 ms 00:10:18.462 00:10:18.462 --- 10.0.0.1 ping statistics --- 00:10:18.462 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:18.462 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=854475 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 854475 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 854475 ']' 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:18.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:18.462 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.462 [2024-12-13 06:16:09.223710] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:18.462 [2024-12-13 06:16:09.223760] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:18.462 [2024-12-13 06:16:09.302403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:18.462 [2024-12-13 06:16:09.326153] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:18.462 [2024-12-13 06:16:09.326197] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:18.462 [2024-12-13 06:16:09.326205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:18.462 [2024-12-13 06:16:09.326212] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:18.462 [2024-12-13 06:16:09.326217] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:18.463 [2024-12-13 06:16:09.327554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:18.463 [2024-12-13 06:16:09.327667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:18.463 [2024-12-13 06:16:09.327773] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.463 [2024-12-13 06:16:09.327775] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:18.463 [2024-12-13 06:16:09.645036] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:18.463 06:16:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.722 06:16:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:18.722 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.722 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:18.722 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:18.981 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:18.981 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:19.241 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.500 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:19.500 06:16:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.759 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:19.759 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:19.759 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:19.759 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:20.018 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:20.277 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.277 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:20.536 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:20.536 06:16:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:20.536 06:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:20.795 [2024-12-13 06:16:12.345165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:20.795 06:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:21.055 06:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:21.314 06:16:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:22.692 06:16:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:24.616 06:16:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:24.616 [global] 00:10:24.616 thread=1 00:10:24.616 invalidate=1 00:10:24.616 rw=write 00:10:24.616 time_based=1 00:10:24.616 runtime=1 00:10:24.616 ioengine=libaio 00:10:24.616 direct=1 00:10:24.616 bs=4096 00:10:24.616 iodepth=1 00:10:24.616 norandommap=0 00:10:24.616 numjobs=1 00:10:24.616 00:10:24.616 
verify_dump=1 00:10:24.616 verify_backlog=512 00:10:24.616 verify_state_save=0 00:10:24.616 do_verify=1 00:10:24.616 verify=crc32c-intel 00:10:24.616 [job0] 00:10:24.616 filename=/dev/nvme0n1 00:10:24.616 [job1] 00:10:24.616 filename=/dev/nvme0n2 00:10:24.616 [job2] 00:10:24.616 filename=/dev/nvme0n3 00:10:24.616 [job3] 00:10:24.616 filename=/dev/nvme0n4 00:10:24.616 Could not set queue depth (nvme0n1) 00:10:24.616 Could not set queue depth (nvme0n2) 00:10:24.616 Could not set queue depth (nvme0n3) 00:10:24.616 Could not set queue depth (nvme0n4) 00:10:24.874 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.874 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.874 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.874 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:24.874 fio-3.35 00:10:24.874 Starting 4 threads 00:10:26.247 00:10:26.247 job0: (groupid=0, jobs=1): err= 0: pid=855923: Fri Dec 13 06:16:17 2024 00:10:26.247 read: IOPS=2228, BW=8915KiB/s (9129kB/s)(8924KiB/1001msec) 00:10:26.247 slat (nsec): min=6562, max=25028, avg=8247.11, stdev=1120.82 00:10:26.247 clat (usec): min=160, max=934, avg=232.01, stdev=52.36 00:10:26.247 lat (usec): min=168, max=942, avg=240.26, stdev=52.16 00:10:26.247 clat percentiles (usec): 00:10:26.247 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 196], 00:10:26.247 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 225], 60.00th=[ 235], 00:10:26.247 | 70.00th=[ 245], 80.00th=[ 251], 90.00th=[ 265], 95.00th=[ 310], 00:10:26.247 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 529], 99.95th=[ 578], 00:10:26.247 | 99.99th=[ 938] 00:10:26.247 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:26.247 slat (nsec): min=9827, max=34573, avg=11634.90, stdev=1685.68 
00:10:26.247 clat (usec): min=112, max=441, avg=164.87, stdev=38.78 00:10:26.247 lat (usec): min=123, max=471, avg=176.51, stdev=38.70 00:10:26.247 clat percentiles (usec): 00:10:26.247 | 1.00th=[ 122], 5.00th=[ 129], 10.00th=[ 133], 20.00th=[ 137], 00:10:26.247 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 153], 00:10:26.247 | 70.00th=[ 165], 80.00th=[ 210], 90.00th=[ 233], 95.00th=[ 243], 00:10:26.247 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 314], 99.95th=[ 351], 00:10:26.247 | 99.99th=[ 441] 00:10:26.247 bw ( KiB/s): min= 9080, max= 9080, per=38.84%, avg=9080.00, stdev= 0.00, samples=1 00:10:26.247 iops : min= 2270, max= 2270, avg=2270.00, stdev= 0.00, samples=1 00:10:26.247 lat (usec) : 250=88.62%, 500=11.17%, 750=0.19%, 1000=0.02% 00:10:26.247 cpu : usr=2.90%, sys=4.80%, ctx=4792, majf=0, minf=1 00:10:26.247 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.247 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.247 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.247 issued rwts: total=2231,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.248 job1: (groupid=0, jobs=1): err= 0: pid=855936: Fri Dec 13 06:16:17 2024 00:10:26.248 read: IOPS=335, BW=1342KiB/s (1375kB/s)(1396KiB/1040msec) 00:10:26.248 slat (nsec): min=7651, max=26010, avg=9654.53, stdev=3113.62 00:10:26.248 clat (usec): min=196, max=41139, avg=2702.79, stdev=9698.12 00:10:26.248 lat (usec): min=204, max=41149, avg=2712.45, stdev=9700.68 00:10:26.248 clat percentiles (usec): 00:10:26.248 | 1.00th=[ 204], 5.00th=[ 210], 10.00th=[ 217], 20.00th=[ 227], 00:10:26.248 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 269], 00:10:26.248 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 306], 95.00th=[41157], 00:10:26.248 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:26.248 | 99.99th=[41157] 
00:10:26.248 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:10:26.248 slat (nsec): min=10734, max=70133, avg=12510.45, stdev=3277.62 00:10:26.248 clat (usec): min=135, max=307, avg=163.75, stdev=13.08 00:10:26.248 lat (usec): min=148, max=377, avg=176.26, stdev=14.74 00:10:26.248 clat percentiles (usec): 00:10:26.248 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:10:26.248 | 30.00th=[ 157], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 165], 00:10:26.248 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 178], 95.00th=[ 184], 00:10:26.248 | 99.00th=[ 194], 99.50th=[ 202], 99.90th=[ 306], 99.95th=[ 306], 00:10:26.248 | 99.99th=[ 306] 00:10:26.248 bw ( KiB/s): min= 4096, max= 4096, per=17.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.248 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.248 lat (usec) : 250=81.30%, 500=16.26% 00:10:26.248 lat (msec) : 50=2.44% 00:10:26.248 cpu : usr=1.06%, sys=1.06%, ctx=862, majf=0, minf=1 00:10:26.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 issued rwts: total=349,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.248 job2: (groupid=0, jobs=1): err= 0: pid=855955: Fri Dec 13 06:16:17 2024 00:10:26.248 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:26.248 slat (nsec): min=6718, max=47070, avg=8572.88, stdev=1640.43 00:10:26.248 clat (usec): min=187, max=506, avg=244.76, stdev=31.97 00:10:26.248 lat (usec): min=194, max=514, avg=253.34, stdev=32.37 00:10:26.248 clat percentiles (usec): 00:10:26.248 | 1.00th=[ 198], 5.00th=[ 208], 10.00th=[ 217], 20.00th=[ 227], 00:10:26.248 | 30.00th=[ 233], 40.00th=[ 239], 50.00th=[ 243], 60.00th=[ 247], 00:10:26.248 | 70.00th=[ 251], 
80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 285], 00:10:26.248 | 99.00th=[ 420], 99.50th=[ 465], 99.90th=[ 490], 99.95th=[ 490], 00:10:26.248 | 99.99th=[ 506] 00:10:26.248 write: IOPS=2492, BW=9970KiB/s (10.2MB/s)(9980KiB/1001msec); 0 zone resets 00:10:26.248 slat (nsec): min=9487, max=40924, avg=11918.26, stdev=2027.38 00:10:26.248 clat (usec): min=119, max=1103, avg=176.21, stdev=39.35 00:10:26.248 lat (usec): min=131, max=1113, avg=188.13, stdev=39.67 00:10:26.248 clat percentiles (usec): 00:10:26.248 | 1.00th=[ 128], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:10:26.248 | 30.00th=[ 151], 40.00th=[ 157], 50.00th=[ 167], 60.00th=[ 184], 00:10:26.248 | 70.00th=[ 198], 80.00th=[ 210], 90.00th=[ 225], 95.00th=[ 237], 00:10:26.248 | 99.00th=[ 265], 99.50th=[ 281], 99.90th=[ 306], 99.95th=[ 314], 00:10:26.248 | 99.99th=[ 1106] 00:10:26.248 bw ( KiB/s): min= 9320, max= 9320, per=39.86%, avg=9320.00, stdev= 0.00, samples=1 00:10:26.248 iops : min= 2330, max= 2330, avg=2330.00, stdev= 0.00, samples=1 00:10:26.248 lat (usec) : 250=84.81%, 500=15.14%, 750=0.02% 00:10:26.248 lat (msec) : 2=0.02% 00:10:26.248 cpu : usr=2.70%, sys=4.80%, ctx=4544, majf=0, minf=1 00:10:26.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 issued rwts: total=2048,2495,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.248 job3: (groupid=0, jobs=1): err= 0: pid=855960: Fri Dec 13 06:16:17 2024 00:10:26.248 read: IOPS=21, BW=85.4KiB/s (87.5kB/s)(88.0KiB/1030msec) 00:10:26.248 slat (nsec): min=9060, max=27266, avg=12780.68, stdev=5928.52 00:10:26.248 clat (usec): min=40620, max=41988, avg=41240.34, stdev=451.26 00:10:26.248 lat (usec): min=40630, max=42010, avg=41253.12, stdev=454.06 00:10:26.248 clat percentiles (usec): 
00:10:26.248 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:26.248 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:26.248 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:10:26.248 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:26.248 | 99.99th=[42206] 00:10:26.248 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:10:26.248 slat (nsec): min=10453, max=38987, avg=12007.35, stdev=2347.88 00:10:26.248 clat (usec): min=147, max=409, avg=224.82, stdev=18.40 00:10:26.248 lat (usec): min=158, max=448, avg=236.83, stdev=18.88 00:10:26.248 clat percentiles (usec): 00:10:26.248 | 1.00th=[ 180], 5.00th=[ 198], 10.00th=[ 206], 20.00th=[ 215], 00:10:26.248 | 30.00th=[ 217], 40.00th=[ 221], 50.00th=[ 225], 60.00th=[ 229], 00:10:26.248 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 243], 95.00th=[ 249], 00:10:26.248 | 99.00th=[ 262], 99.50th=[ 297], 99.90th=[ 412], 99.95th=[ 412], 00:10:26.248 | 99.99th=[ 412] 00:10:26.248 bw ( KiB/s): min= 4096, max= 4096, per=17.52%, avg=4096.00, stdev= 0.00, samples=1 00:10:26.248 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:26.248 lat (usec) : 250=91.57%, 500=4.31% 00:10:26.248 lat (msec) : 50=4.12% 00:10:26.248 cpu : usr=0.19%, sys=0.58%, ctx=538, majf=0, minf=1 00:10:26.248 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:26.248 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:26.248 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:26.248 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:26.248 00:10:26.248 Run status group 0 (all jobs): 00:10:26.248 READ: bw=17.5MiB/s (18.3MB/s), 85.4KiB/s-8915KiB/s (87.5kB/s-9129kB/s), io=18.2MiB (19.0MB), run=1001-1040msec 00:10:26.248 WRITE: bw=22.8MiB/s (23.9MB/s), 
1969KiB/s-9.99MiB/s (2016kB/s-10.5MB/s), io=23.7MiB (24.9MB), run=1001-1040msec 00:10:26.248 00:10:26.248 Disk stats (read/write): 00:10:26.248 nvme0n1: ios=2011/2048, merge=0/0, ticks=1449/322, in_queue=1771, util=98.00% 00:10:26.248 nvme0n2: ios=367/512, merge=0/0, ticks=746/73, in_queue=819, util=87.18% 00:10:26.248 nvme0n3: ios=1755/2048, merge=0/0, ticks=432/360, in_queue=792, util=88.95% 00:10:26.248 nvme0n4: ios=75/512, merge=0/0, ticks=1673/111, in_queue=1784, util=98.42% 00:10:26.248 06:16:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:26.248 [global] 00:10:26.248 thread=1 00:10:26.248 invalidate=1 00:10:26.248 rw=randwrite 00:10:26.248 time_based=1 00:10:26.248 runtime=1 00:10:26.248 ioengine=libaio 00:10:26.248 direct=1 00:10:26.248 bs=4096 00:10:26.248 iodepth=1 00:10:26.248 norandommap=0 00:10:26.248 numjobs=1 00:10:26.248 00:10:26.248 verify_dump=1 00:10:26.248 verify_backlog=512 00:10:26.248 verify_state_save=0 00:10:26.248 do_verify=1 00:10:26.248 verify=crc32c-intel 00:10:26.248 [job0] 00:10:26.248 filename=/dev/nvme0n1 00:10:26.248 [job1] 00:10:26.248 filename=/dev/nvme0n2 00:10:26.248 [job2] 00:10:26.248 filename=/dev/nvme0n3 00:10:26.248 [job3] 00:10:26.248 filename=/dev/nvme0n4 00:10:26.248 Could not set queue depth (nvme0n1) 00:10:26.248 Could not set queue depth (nvme0n2) 00:10:26.248 Could not set queue depth (nvme0n3) 00:10:26.248 Could not set queue depth (nvme0n4) 00:10:26.506 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.506 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.506 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.506 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:10:26.506 fio-3.35 00:10:26.506 Starting 4 threads 00:10:27.881 00:10:27.881 job0: (groupid=0, jobs=1): err= 0: pid=856375: Fri Dec 13 06:16:19 2024 00:10:27.881 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:10:27.881 slat (nsec): min=8719, max=23654, avg=22506.32, stdev=3089.52 00:10:27.881 clat (usec): min=40873, max=41885, avg=41038.30, stdev=221.13 00:10:27.881 lat (usec): min=40896, max=41908, avg=41060.80, stdev=219.95 00:10:27.881 clat percentiles (usec): 00:10:27.881 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:10:27.881 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:27.881 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:27.881 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:10:27.881 | 99.99th=[41681] 00:10:27.881 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:10:27.881 slat (nsec): min=9148, max=39222, avg=11108.55, stdev=2537.28 00:10:27.881 clat (usec): min=131, max=542, avg=185.10, stdev=35.73 00:10:27.881 lat (usec): min=141, max=555, avg=196.21, stdev=36.20 00:10:27.881 clat percentiles (usec): 00:10:27.881 | 1.00th=[ 139], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:10:27.881 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 186], 00:10:27.881 | 70.00th=[ 192], 80.00th=[ 200], 90.00th=[ 212], 95.00th=[ 233], 00:10:27.881 | 99.00th=[ 334], 99.50th=[ 383], 99.90th=[ 545], 99.95th=[ 545], 00:10:27.881 | 99.99th=[ 545] 00:10:27.881 bw ( KiB/s): min= 4096, max= 4096, per=17.10%, avg=4096.00, stdev= 0.00, samples=1 00:10:27.881 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:27.881 lat (usec) : 250=94.01%, 500=1.50%, 750=0.37% 00:10:27.881 lat (msec) : 50=4.12% 00:10:27.881 cpu : usr=0.50%, sys=0.40%, ctx=537, majf=0, minf=1 00:10:27.881 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.881 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.881 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.881 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.881 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.881 job1: (groupid=0, jobs=1): err= 0: pid=856376: Fri Dec 13 06:16:19 2024 00:10:27.881 read: IOPS=1535, BW=6141KiB/s (6288kB/s)(6196KiB/1009msec) 00:10:27.881 slat (nsec): min=7292, max=43267, avg=8774.08, stdev=2120.70 00:10:27.881 clat (usec): min=165, max=41444, avg=406.11, stdev=2761.33 00:10:27.882 lat (usec): min=173, max=41452, avg=414.88, stdev=2761.67 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 200], 00:10:27.882 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:10:27.882 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 231], 95.00th=[ 239], 00:10:27.882 | 99.00th=[ 265], 99.50th=[10290], 99.90th=[41157], 99.95th=[41681], 00:10:27.882 | 99.99th=[41681] 00:10:27.882 write: IOPS=2029, BW=8119KiB/s (8314kB/s)(8192KiB/1009msec); 0 zone resets 00:10:27.882 slat (nsec): min=3587, max=35106, avg=11505.60, stdev=2279.67 00:10:27.882 clat (usec): min=117, max=462, avg=161.85, stdev=28.78 00:10:27.882 lat (usec): min=128, max=473, avg=173.36, stdev=28.71 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 126], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 143], 00:10:27.882 | 30.00th=[ 147], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 159], 00:10:27.882 | 70.00th=[ 165], 80.00th=[ 176], 90.00th=[ 194], 95.00th=[ 239], 00:10:27.882 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 310], 99.95th=[ 367], 00:10:27.882 | 99.99th=[ 461] 00:10:27.882 bw ( KiB/s): min= 6560, max= 9824, per=34.20%, avg=8192.00, stdev=2308.00, samples=2 00:10:27.882 iops : min= 1640, max= 2456, avg=2048.00, stdev=577.00, samples=2 00:10:27.882 lat (usec) : 250=98.36%, 500=1.39% 00:10:27.882 lat (msec) : 
10=0.03%, 20=0.03%, 50=0.19% 00:10:27.882 cpu : usr=3.37%, sys=5.26%, ctx=3598, majf=0, minf=1 00:10:27.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 issued rwts: total=1549,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.882 job2: (groupid=0, jobs=1): err= 0: pid=856377: Fri Dec 13 06:16:19 2024 00:10:27.882 read: IOPS=564, BW=2257KiB/s (2311kB/s)(2316KiB/1026msec) 00:10:27.882 slat (nsec): min=5164, max=24481, avg=8222.90, stdev=2122.93 00:10:27.882 clat (usec): min=189, max=41065, avg=1441.27, stdev=6857.95 00:10:27.882 lat (usec): min=197, max=41089, avg=1449.49, stdev=6859.36 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 196], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:10:27.882 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 229], 60.00th=[ 233], 00:10:27.882 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 289], 00:10:27.882 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:27.882 | 99.99th=[41157] 00:10:27.882 write: IOPS=998, BW=3992KiB/s (4088kB/s)(4096KiB/1026msec); 0 zone resets 00:10:27.882 slat (nsec): min=3223, max=38068, avg=8604.89, stdev=3598.16 00:10:27.882 clat (usec): min=122, max=354, avg=169.70, stdev=32.38 00:10:27.882 lat (usec): min=125, max=364, avg=178.31, stdev=34.08 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:10:27.882 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:10:27.882 | 70.00th=[ 169], 80.00th=[ 180], 90.00th=[ 239], 95.00th=[ 243], 00:10:27.882 | 99.00th=[ 247], 99.50th=[ 251], 99.90th=[ 322], 99.95th=[ 355], 00:10:27.882 | 99.99th=[ 355] 00:10:27.882 bw ( KiB/s): min= 8192, max= 8192, 
per=34.20%, avg=8192.00, stdev= 0.00, samples=1 00:10:27.882 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:27.882 lat (usec) : 250=93.76%, 500=5.12% 00:10:27.882 lat (msec) : 20=0.06%, 50=1.06% 00:10:27.882 cpu : usr=1.56%, sys=1.66%, ctx=1603, majf=0, minf=2 00:10:27.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 issued rwts: total=579,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.882 job3: (groupid=0, jobs=1): err= 0: pid=856378: Fri Dec 13 06:16:19 2024 00:10:27.882 read: IOPS=2517, BW=9.83MiB/s (10.3MB/s)(9.84MiB/1001msec) 00:10:27.882 slat (nsec): min=6313, max=40194, avg=7934.20, stdev=1247.08 00:10:27.882 clat (usec): min=162, max=480, avg=212.15, stdev=28.99 00:10:27.882 lat (usec): min=171, max=505, avg=220.08, stdev=29.15 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:10:27.882 | 30.00th=[ 194], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 212], 00:10:27.882 | 70.00th=[ 223], 80.00th=[ 239], 90.00th=[ 253], 95.00th=[ 260], 00:10:27.882 | 99.00th=[ 289], 99.50th=[ 314], 99.90th=[ 433], 99.95th=[ 457], 00:10:27.882 | 99.99th=[ 482] 00:10:27.882 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:27.882 slat (nsec): min=9726, max=42795, avg=10850.90, stdev=1767.23 00:10:27.882 clat (usec): min=119, max=623, avg=157.39, stdev=22.32 00:10:27.882 lat (usec): min=129, max=634, avg=168.24, stdev=22.55 00:10:27.882 clat percentiles (usec): 00:10:27.882 | 1.00th=[ 126], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 141], 00:10:27.882 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 161], 00:10:27.882 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 178], 
95.00th=[ 186], 00:10:27.882 | 99.00th=[ 208], 99.50th=[ 231], 99.90th=[ 433], 99.95th=[ 515], 00:10:27.882 | 99.99th=[ 627] 00:10:27.882 bw ( KiB/s): min=11224, max=11224, per=46.86%, avg=11224.00, stdev= 0.00, samples=1 00:10:27.882 iops : min= 2806, max= 2806, avg=2806.00, stdev= 0.00, samples=1 00:10:27.882 lat (usec) : 250=94.17%, 500=5.79%, 750=0.04% 00:10:27.882 cpu : usr=4.80%, sys=7.10%, ctx=5080, majf=0, minf=1 00:10:27.882 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:27.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:27.882 issued rwts: total=2520,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:27.882 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:27.882 00:10:27.882 Run status group 0 (all jobs): 00:10:27.882 READ: bw=17.8MiB/s (18.6MB/s), 87.5KiB/s-9.83MiB/s (89.6kB/s-10.3MB/s), io=18.2MiB (19.1MB), run=1001-1026msec 00:10:27.882 WRITE: bw=23.4MiB/s (24.5MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1026msec 00:10:27.882 00:10:27.882 Disk stats (read/write): 00:10:27.882 nvme0n1: ios=67/512, merge=0/0, ticks=1105/89, in_queue=1194, util=89.48% 00:10:27.882 nvme0n2: ios=1575/2048, merge=0/0, ticks=626/299, in_queue=925, util=98.25% 00:10:27.882 nvme0n3: ios=629/1024, merge=0/0, ticks=849/170, in_queue=1019, util=97.73% 00:10:27.882 nvme0n4: ios=1997/2048, merge=0/0, ticks=429/312, in_queue=741, util=90.09% 00:10:27.882 06:16:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:27.882 [global] 00:10:27.882 thread=1 00:10:27.882 invalidate=1 00:10:27.882 rw=write 00:10:27.882 time_based=1 00:10:27.882 runtime=1 00:10:27.882 ioengine=libaio 00:10:27.882 direct=1 00:10:27.882 bs=4096 00:10:27.882 iodepth=128 00:10:27.882 
norandommap=0 00:10:27.882 numjobs=1 00:10:27.882 00:10:27.882 verify_dump=1 00:10:27.882 verify_backlog=512 00:10:27.882 verify_state_save=0 00:10:27.882 do_verify=1 00:10:27.882 verify=crc32c-intel 00:10:27.882 [job0] 00:10:27.882 filename=/dev/nvme0n1 00:10:27.882 [job1] 00:10:27.882 filename=/dev/nvme0n2 00:10:27.882 [job2] 00:10:27.882 filename=/dev/nvme0n3 00:10:27.882 [job3] 00:10:27.882 filename=/dev/nvme0n4 00:10:27.882 Could not set queue depth (nvme0n1) 00:10:27.882 Could not set queue depth (nvme0n2) 00:10:27.882 Could not set queue depth (nvme0n3) 00:10:27.882 Could not set queue depth (nvme0n4) 00:10:27.882 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.882 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.882 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.882 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:27.882 fio-3.35 00:10:27.882 Starting 4 threads 00:10:29.257 00:10:29.257 job0: (groupid=0, jobs=1): err= 0: pid=856744: Fri Dec 13 06:16:20 2024 00:10:29.257 read: IOPS=4776, BW=18.7MiB/s (19.6MB/s)(19.5MiB/1044msec) 00:10:29.257 slat (nsec): min=1171, max=12826k, avg=102474.53, stdev=712681.29 00:10:29.257 clat (usec): min=4013, max=54418, avg=13466.26, stdev=7307.91 00:10:29.257 lat (usec): min=4016, max=59460, avg=13568.73, stdev=7343.59 00:10:29.257 clat percentiles (usec): 00:10:29.257 | 1.00th=[ 5342], 5.00th=[ 7439], 10.00th=[ 8848], 20.00th=[ 9765], 00:10:29.257 | 30.00th=[10683], 40.00th=[11469], 50.00th=[11600], 60.00th=[11994], 00:10:29.257 | 70.00th=[13173], 80.00th=[15533], 90.00th=[18482], 95.00th=[20841], 00:10:29.257 | 99.00th=[53740], 99.50th=[54264], 99.90th=[54264], 99.95th=[54264], 00:10:29.257 | 99.99th=[54264] 00:10:29.257 write: IOPS=4904, BW=19.2MiB/s 
(20.1MB/s)(20.0MiB/1044msec); 0 zone resets 00:10:29.257 slat (nsec): min=1897, max=10737k, avg=89545.68, stdev=548301.37 00:10:29.257 clat (usec): min=1029, max=67429, avg=12692.35, stdev=6288.74 00:10:29.257 lat (usec): min=1036, max=67433, avg=12781.89, stdev=6322.13 00:10:29.257 clat percentiles (usec): 00:10:29.257 | 1.00th=[ 3884], 5.00th=[ 6915], 10.00th=[ 8848], 20.00th=[10028], 00:10:29.257 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11731], 00:10:29.257 | 70.00th=[11994], 80.00th=[13698], 90.00th=[17171], 95.00th=[23200], 00:10:29.257 | 99.00th=[49546], 99.50th=[50070], 99.90th=[65274], 99.95th=[65274], 00:10:29.257 | 99.99th=[67634] 00:10:29.257 bw ( KiB/s): min=19768, max=21192, per=28.09%, avg=20480.00, stdev=1006.92, samples=2 00:10:29.257 iops : min= 4942, max= 5298, avg=5120.00, stdev=251.73, samples=2 00:10:29.257 lat (msec) : 2=0.22%, 4=0.46%, 10=21.62%, 20=70.88%, 50=6.03% 00:10:29.257 lat (msec) : 100=0.80% 00:10:29.257 cpu : usr=3.07%, sys=5.94%, ctx=476, majf=0, minf=2 00:10:29.257 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:10:29.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.258 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.258 job1: (groupid=0, jobs=1): err= 0: pid=856745: Fri Dec 13 06:16:20 2024 00:10:29.258 read: IOPS=4442, BW=17.4MiB/s (18.2MB/s)(17.4MiB/1005msec) 00:10:29.258 slat (nsec): min=1109, max=22859k, avg=95618.87, stdev=703600.50 00:10:29.258 clat (usec): min=1360, max=58455, avg=13108.16, stdev=6526.35 00:10:29.258 lat (usec): min=2599, max=65811, avg=13203.78, stdev=6569.63 00:10:29.258 clat percentiles (usec): 00:10:29.258 | 1.00th=[ 3458], 5.00th=[ 6325], 10.00th=[ 7111], 20.00th=[ 9634], 00:10:29.258 | 30.00th=[11207], 40.00th=[11863], 50.00th=[12125], 
60.00th=[12518], 00:10:29.258 | 70.00th=[14091], 80.00th=[15008], 90.00th=[16909], 95.00th=[22152], 00:10:29.258 | 99.00th=[48497], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:10:29.258 | 99.99th=[58459] 00:10:29.258 write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:29.258 slat (nsec): min=1964, max=40952k, avg=101021.94, stdev=1089824.89 00:10:29.258 clat (usec): min=1281, max=113123, avg=13647.59, stdev=11999.24 00:10:29.258 lat (usec): min=1289, max=113133, avg=13748.61, stdev=12093.78 00:10:29.258 clat percentiles (msec): 00:10:29.258 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 9], 20.00th=[ 11], 00:10:29.258 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:10:29.258 | 70.00th=[ 13], 80.00th=[ 13], 90.00th=[ 15], 95.00th=[ 21], 00:10:29.258 | 99.00th=[ 83], 99.50th=[ 113], 99.90th=[ 113], 99.95th=[ 113], 00:10:29.258 | 99.99th=[ 113] 00:10:29.258 bw ( KiB/s): min=16384, max=20480, per=25.28%, avg=18432.00, stdev=2896.31, samples=2 00:10:29.258 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:29.258 lat (msec) : 2=0.17%, 4=1.07%, 10=17.68%, 20=75.39%, 50=4.00% 00:10:29.258 lat (msec) : 100=1.36%, 250=0.34% 00:10:29.258 cpu : usr=3.29%, sys=5.18%, ctx=409, majf=0, minf=1 00:10:29.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:29.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.258 issued rwts: total=4465,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.258 job2: (groupid=0, jobs=1): err= 0: pid=856747: Fri Dec 13 06:16:20 2024 00:10:29.258 read: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec) 00:10:29.258 slat (nsec): min=1373, max=11816k, avg=107203.14, stdev=773319.05 00:10:29.258 clat (usec): min=4738, max=32572, avg=13794.23, stdev=3973.66 00:10:29.258 
lat (usec): min=4746, max=32580, avg=13901.43, stdev=4032.64 00:10:29.258 clat percentiles (usec): 00:10:29.258 | 1.00th=[ 5538], 5.00th=[ 8356], 10.00th=[ 9765], 20.00th=[10683], 00:10:29.258 | 30.00th=[11994], 40.00th=[12649], 50.00th=[13173], 60.00th=[13435], 00:10:29.258 | 70.00th=[14484], 80.00th=[16712], 90.00th=[19530], 95.00th=[21365], 00:10:29.258 | 99.00th=[25297], 99.50th=[29230], 99.90th=[32637], 99.95th=[32637], 00:10:29.258 | 99.99th=[32637] 00:10:29.258 write: IOPS=4358, BW=17.0MiB/s (17.9MB/s)(17.2MiB/1009msec); 0 zone resets 00:10:29.258 slat (usec): min=2, max=10618, avg=109.97, stdev=618.81 00:10:29.258 clat (usec): min=384, max=66799, avg=16210.23, stdev=12309.81 00:10:29.258 lat (usec): min=396, max=66814, avg=16320.20, stdev=12389.90 00:10:29.258 clat percentiles (usec): 00:10:29.258 | 1.00th=[ 1942], 5.00th=[ 4178], 10.00th=[ 6980], 20.00th=[ 9896], 00:10:29.258 | 30.00th=[11207], 40.00th=[11994], 50.00th=[12649], 60.00th=[13173], 00:10:29.258 | 70.00th=[13435], 80.00th=[19530], 90.00th=[32637], 95.00th=[49546], 00:10:29.258 | 99.00th=[60556], 99.50th=[61604], 99.90th=[66847], 99.95th=[66847], 00:10:29.258 | 99.99th=[66847] 00:10:29.258 bw ( KiB/s): min=16384, max=17784, per=23.43%, avg=17084.00, stdev=989.95, samples=2 00:10:29.258 iops : min= 4096, max= 4446, avg=4271.00, stdev=247.49, samples=2 00:10:29.258 lat (usec) : 500=0.02%, 750=0.11%, 1000=0.12% 00:10:29.258 lat (msec) : 2=0.32%, 4=1.60%, 10=13.72%, 20=70.25%, 50=11.41% 00:10:29.258 lat (msec) : 100=2.46% 00:10:29.258 cpu : usr=4.17%, sys=5.26%, ctx=473, majf=0, minf=1 00:10:29.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:29.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:29.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.258 issued rwts: total=4096,4398,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.258 
job3: (groupid=0, jobs=1): err= 0: pid=856748: Fri Dec 13 06:16:20 2024 00:10:29.258 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:10:29.258 slat (nsec): min=1117, max=20274k, avg=107553.80, stdev=801617.45 00:10:29.258 clat (usec): min=4156, max=44503, avg=13953.90, stdev=5702.74 00:10:29.258 lat (usec): min=4232, max=44527, avg=14061.45, stdev=5744.53 00:10:29.258 clat percentiles (usec): 00:10:29.258 | 1.00th=[ 7373], 5.00th=[10028], 10.00th=[10421], 20.00th=[10814], 00:10:29.258 | 30.00th=[11076], 40.00th=[11994], 50.00th=[13042], 60.00th=[13698], 00:10:29.258 | 70.00th=[14091], 80.00th=[15139], 90.00th=[16581], 95.00th=[26084], 00:10:29.258 | 99.00th=[39584], 99.50th=[39584], 99.90th=[43779], 99.95th=[43779], 00:10:29.258 | 99.99th=[44303] 00:10:29.258 write: IOPS=4887, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1003msec); 0 zone resets 00:10:29.258 slat (nsec): min=1956, max=13068k, avg=91766.95, stdev=556479.42 00:10:29.258 clat (usec): min=486, max=46320, avg=12854.77, stdev=4624.49 00:10:29.258 lat (usec): min=1234, max=46329, avg=12946.53, stdev=4663.55 00:10:29.258 clat percentiles (usec): 00:10:29.258 | 1.00th=[ 3359], 5.00th=[ 6783], 10.00th=[ 9241], 20.00th=[10290], 00:10:29.258 | 30.00th=[10814], 40.00th=[12125], 50.00th=[12911], 60.00th=[13435], 00:10:29.258 | 70.00th=[13566], 80.00th=[13960], 90.00th=[16450], 95.00th=[17695], 00:10:29.258 | 99.00th=[35390], 99.50th=[46400], 99.90th=[46400], 99.95th=[46400], 00:10:29.258 | 99.99th=[46400] 00:10:29.258 bw ( KiB/s): min=17712, max=20480, per=26.19%, avg=19096.00, stdev=1957.27, samples=2 00:10:29.258 iops : min= 4428, max= 5120, avg=4774.00, stdev=489.32, samples=2 00:10:29.258 lat (usec) : 500=0.01% 00:10:29.258 lat (msec) : 2=0.07%, 4=0.63%, 10=10.03%, 20=84.19%, 50=5.07% 00:10:29.258 cpu : usr=3.99%, sys=4.39%, ctx=481, majf=0, minf=2 00:10:29.258 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:29.258 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:10:29.258 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:29.258 issued rwts: total=4608,4902,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:29.258 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:29.258 00:10:29.258 Run status group 0 (all jobs): 00:10:29.258 READ: bw=67.9MiB/s (71.2MB/s), 15.9MiB/s-18.7MiB/s (16.6MB/s-19.6MB/s), io=70.9MiB (74.4MB), run=1003-1044msec 00:10:29.258 WRITE: bw=71.2MiB/s (74.7MB/s), 17.0MiB/s-19.2MiB/s (17.9MB/s-20.1MB/s), io=74.3MiB (77.9MB), run=1003-1044msec 00:10:29.258 00:10:29.258 Disk stats (read/write): 00:10:29.258 nvme0n1: ios=3861/4096, merge=0/0, ticks=31621/33305, in_queue=64926, util=99.40% 00:10:29.258 nvme0n2: ios=3415/3584, merge=0/0, ticks=32583/29939, in_queue=62522, util=99.18% 00:10:29.258 nvme0n3: ios=3129/3471, merge=0/0, ticks=40792/58941, in_queue=99733, util=97.73% 00:10:29.258 nvme0n4: ios=3584/4096, merge=0/0, ticks=32579/29843, in_queue=62422, util=89.21% 00:10:29.258 06:16:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:29.258 [global] 00:10:29.258 thread=1 00:10:29.258 invalidate=1 00:10:29.258 rw=randwrite 00:10:29.258 time_based=1 00:10:29.258 runtime=1 00:10:29.258 ioengine=libaio 00:10:29.258 direct=1 00:10:29.258 bs=4096 00:10:29.258 iodepth=128 00:10:29.258 norandommap=0 00:10:29.258 numjobs=1 00:10:29.258 00:10:29.258 verify_dump=1 00:10:29.258 verify_backlog=512 00:10:29.258 verify_state_save=0 00:10:29.258 do_verify=1 00:10:29.258 verify=crc32c-intel 00:10:29.258 [job0] 00:10:29.258 filename=/dev/nvme0n1 00:10:29.258 [job1] 00:10:29.258 filename=/dev/nvme0n2 00:10:29.258 [job2] 00:10:29.258 filename=/dev/nvme0n3 00:10:29.258 [job3] 00:10:29.258 filename=/dev/nvme0n4 00:10:29.258 Could not set queue depth (nvme0n1) 00:10:29.258 Could not set queue depth (nvme0n2) 00:10:29.258 Could not set queue depth 
(nvme0n3) 00:10:29.258 Could not set queue depth (nvme0n4) 00:10:29.515 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.515 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.515 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.515 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:29.515 fio-3.35 00:10:29.515 Starting 4 threads 00:10:30.890 00:10:30.890 job0: (groupid=0, jobs=1): err= 0: pid=857116: Fri Dec 13 06:16:22 2024 00:10:30.890 read: IOPS=2965, BW=11.6MiB/s (12.1MB/s)(11.7MiB/1006msec) 00:10:30.890 slat (nsec): min=1185, max=21683k, avg=187244.07, stdev=1009168.55 00:10:30.890 clat (usec): min=3245, max=61629, avg=23979.10, stdev=13663.46 00:10:30.890 lat (usec): min=5786, max=61659, avg=24166.35, stdev=13734.34 00:10:30.890 clat percentiles (usec): 00:10:30.890 | 1.00th=[ 6783], 5.00th=[ 8717], 10.00th=[ 9896], 20.00th=[10159], 00:10:30.890 | 30.00th=[10421], 40.00th=[15664], 50.00th=[25035], 60.00th=[27919], 00:10:30.890 | 70.00th=[29230], 80.00th=[35914], 90.00th=[39584], 95.00th=[51643], 00:10:30.890 | 99.00th=[60556], 99.50th=[61604], 99.90th=[61604], 99.95th=[61604], 00:10:30.890 | 99.99th=[61604] 00:10:30.890 write: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec); 0 zone resets 00:10:30.890 slat (nsec): min=1837, max=11942k, avg=135690.36, stdev=850735.85 00:10:30.890 clat (usec): min=5423, max=42027, avg=18112.36, stdev=8908.80 00:10:30.890 lat (usec): min=5431, max=44455, avg=18248.05, stdev=8957.18 00:10:30.890 clat percentiles (usec): 00:10:30.890 | 1.00th=[ 7439], 5.00th=[ 8717], 10.00th=[ 9372], 20.00th=[10290], 00:10:30.890 | 30.00th=[10683], 40.00th=[14484], 50.00th=[16319], 60.00th=[17695], 00:10:30.890 | 70.00th=[21365], 80.00th=[23987], 90.00th=[32637], 
95.00th=[39060], 00:10:30.890 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:30.890 | 99.99th=[42206] 00:10:30.890 bw ( KiB/s): min=12288, max=12288, per=19.36%, avg=12288.00, stdev= 0.00, samples=2 00:10:30.890 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:10:30.890 lat (msec) : 4=0.02%, 10=15.33%, 20=39.42%, 50=42.46%, 100=2.77% 00:10:30.890 cpu : usr=2.09%, sys=5.07%, ctx=293, majf=0, minf=1 00:10:30.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.890 issued rwts: total=2983,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.890 job1: (groupid=0, jobs=1): err= 0: pid=857117: Fri Dec 13 06:16:22 2024 00:10:30.890 read: IOPS=3090, BW=12.1MiB/s (12.7MB/s)(12.2MiB/1009msec) 00:10:30.890 slat (nsec): min=1374, max=11915k, avg=120679.55, stdev=727766.86 00:10:30.890 clat (usec): min=7093, max=44656, avg=13804.28, stdev=6018.29 00:10:30.890 lat (usec): min=7104, max=44661, avg=13924.96, stdev=6082.27 00:10:30.890 clat percentiles (usec): 00:10:30.890 | 1.00th=[ 7439], 5.00th=[ 9110], 10.00th=[10290], 20.00th=[10552], 00:10:30.890 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11600], 60.00th=[11994], 00:10:30.890 | 70.00th=[12780], 80.00th=[15270], 90.00th=[20841], 95.00th=[29230], 00:10:30.890 | 99.00th=[38536], 99.50th=[41157], 99.90th=[44827], 99.95th=[44827], 00:10:30.890 | 99.99th=[44827] 00:10:30.890 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:10:30.890 slat (nsec): min=1956, max=10460k, avg=163324.71, stdev=663716.51 00:10:30.890 clat (usec): min=2827, max=46137, avg=23676.77, stdev=11080.83 00:10:30.890 lat (usec): min=2837, max=46147, avg=23840.09, stdev=11154.42 00:10:30.890 clat percentiles (usec): 
00:10:30.890 | 1.00th=[ 4883], 5.00th=[ 7963], 10.00th=[ 8848], 20.00th=[10159], 00:10:30.890 | 30.00th=[15926], 40.00th=[21627], 50.00th=[25822], 60.00th=[27919], 00:10:30.890 | 70.00th=[29230], 80.00th=[34341], 90.00th=[38536], 95.00th=[40109], 00:10:30.890 | 99.00th=[45351], 99.50th=[45351], 99.90th=[45876], 99.95th=[45876], 00:10:30.890 | 99.99th=[45876] 00:10:30.890 bw ( KiB/s): min=12744, max=15280, per=22.08%, avg=14012.00, stdev=1793.22, samples=2 00:10:30.890 iops : min= 3186, max= 3820, avg=3503.00, stdev=448.31, samples=2 00:10:30.890 lat (msec) : 4=0.27%, 10=13.19%, 20=46.49%, 50=40.05% 00:10:30.890 cpu : usr=2.48%, sys=4.86%, ctx=425, majf=0, minf=1 00:10:30.890 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:10:30.890 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.890 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.890 issued rwts: total=3118,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.890 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.890 job2: (groupid=0, jobs=1): err= 0: pid=857118: Fri Dec 13 06:16:22 2024 00:10:30.890 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:10:30.890 slat (nsec): min=1953, max=18130k, avg=238112.24, stdev=1256321.24 00:10:30.890 clat (usec): min=16067, max=77945, avg=29741.75, stdev=12459.74 00:10:30.890 lat (usec): min=16080, max=77973, avg=29979.86, stdev=12558.16 00:10:30.890 clat percentiles (usec): 00:10:30.890 | 1.00th=[16909], 5.00th=[17957], 10.00th=[20055], 20.00th=[20841], 00:10:30.890 | 30.00th=[21365], 40.00th=[21890], 50.00th=[24773], 60.00th=[27395], 00:10:30.890 | 70.00th=[32113], 80.00th=[37487], 90.00th=[48497], 95.00th=[57410], 00:10:30.890 | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[76022], 00:10:30.890 | 99.99th=[78119] 00:10:30.890 write: IOPS=2681, BW=10.5MiB/s (11.0MB/s)(10.5MiB/1006msec); 0 zone resets 00:10:30.890 slat (usec): min=2, max=7497, 
avg=135.44, stdev=668.64 00:10:30.890 clat (usec): min=4716, max=61007, avg=18727.15, stdev=7781.74 00:10:30.890 lat (usec): min=5726, max=61017, avg=18862.59, stdev=7807.22 00:10:30.890 clat percentiles (usec): 00:10:30.890 | 1.00th=[ 9372], 5.00th=[11863], 10.00th=[12518], 20.00th=[14353], 00:10:30.890 | 30.00th=[14746], 40.00th=[15139], 50.00th=[16909], 60.00th=[17171], 00:10:30.891 | 70.00th=[19792], 80.00th=[21627], 90.00th=[26346], 95.00th=[30278], 00:10:30.891 | 99.00th=[56361], 99.50th=[57410], 99.90th=[58983], 99.95th=[61080], 00:10:30.891 | 99.99th=[61080] 00:10:30.891 bw ( KiB/s): min= 8312, max=12288, per=16.23%, avg=10300.00, stdev=2811.46, samples=2 00:10:30.891 iops : min= 2078, max= 3072, avg=2575.00, stdev=702.86, samples=2 00:10:30.891 lat (msec) : 10=1.03%, 20=39.86%, 50=53.73%, 100=5.38% 00:10:30.891 cpu : usr=3.18%, sys=4.38%, ctx=248, majf=0, minf=1 00:10:30.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:10:30.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.891 issued rwts: total=2560,2698,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.891 job3: (groupid=0, jobs=1): err= 0: pid=857119: Fri Dec 13 06:16:22 2024 00:10:30.891 read: IOPS=6238, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1005msec) 00:10:30.891 slat (nsec): min=1297, max=27496k, avg=83552.17, stdev=645302.42 00:10:30.891 clat (usec): min=1087, max=49349, avg=10545.30, stdev=4590.76 00:10:30.891 lat (usec): min=3717, max=49375, avg=10628.85, stdev=4623.30 00:10:30.891 clat percentiles (usec): 00:10:30.891 | 1.00th=[ 5669], 5.00th=[ 6849], 10.00th=[ 7504], 20.00th=[ 8586], 00:10:30.891 | 30.00th=[ 8979], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9634], 00:10:30.891 | 70.00th=[10814], 80.00th=[11469], 90.00th=[13698], 95.00th=[15795], 00:10:30.891 | 99.00th=[43779], 
99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:30.891 | 99.99th=[49546] 00:10:30.891 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:10:30.891 slat (usec): min=2, max=9113, avg=65.91, stdev=356.43 00:10:30.891 clat (usec): min=1082, max=29814, avg=9216.76, stdev=1850.56 00:10:30.891 lat (usec): min=1702, max=38928, avg=9282.67, stdev=1900.06 00:10:30.891 clat percentiles (usec): 00:10:30.891 | 1.00th=[ 3752], 5.00th=[ 5604], 10.00th=[ 7046], 20.00th=[ 8586], 00:10:30.891 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9241], 60.00th=[ 9372], 00:10:30.891 | 70.00th=[ 9503], 80.00th=[10159], 90.00th=[11338], 95.00th=[11731], 00:10:30.891 | 99.00th=[12780], 99.50th=[13304], 99.90th=[29754], 99.95th=[29754], 00:10:30.891 | 99.99th=[29754] 00:10:30.891 bw ( KiB/s): min=24576, max=28656, per=41.94%, avg=26616.00, stdev=2885.00, samples=2 00:10:30.891 iops : min= 6144, max= 7164, avg=6654.00, stdev=721.25, samples=2 00:10:30.891 lat (msec) : 2=0.02%, 4=1.00%, 10=70.56%, 20=26.81%, 50=1.62% 00:10:30.891 cpu : usr=4.78%, sys=7.27%, ctx=774, majf=0, minf=1 00:10:30.891 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:30.891 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:30.891 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:30.891 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:30.891 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:30.891 00:10:30.891 Run status group 0 (all jobs): 00:10:30.891 READ: bw=57.8MiB/s (60.6MB/s), 9.94MiB/s-24.4MiB/s (10.4MB/s-25.6MB/s), io=58.3MiB (61.2MB), run=1005-1009msec 00:10:30.891 WRITE: bw=62.0MiB/s (65.0MB/s), 10.5MiB/s-25.9MiB/s (11.0MB/s-27.1MB/s), io=62.5MiB (65.6MB), run=1005-1009msec 00:10:30.891 00:10:30.891 Disk stats (read/write): 00:10:30.891 nvme0n1: ios=2599/2703, merge=0/0, ticks=19240/15571, in_queue=34811, util=96.19% 00:10:30.891 nvme0n2: ios=2573/3055, 
merge=0/0, ticks=32604/67091, in_queue=99695, util=91.07% 00:10:30.891 nvme0n3: ios=2089/2519, merge=0/0, ticks=21618/13151, in_queue=34769, util=96.88% 00:10:30.891 nvme0n4: ios=5200/5632, merge=0/0, ticks=37989/37256, in_queue=75245, util=89.62% 00:10:30.891 06:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:30.891 06:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=857340 00:10:30.891 06:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:30.891 06:16:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:30.891 [global] 00:10:30.891 thread=1 00:10:30.891 invalidate=1 00:10:30.891 rw=read 00:10:30.891 time_based=1 00:10:30.891 runtime=10 00:10:30.891 ioengine=libaio 00:10:30.891 direct=1 00:10:30.891 bs=4096 00:10:30.891 iodepth=1 00:10:30.891 norandommap=1 00:10:30.891 numjobs=1 00:10:30.891 00:10:30.891 [job0] 00:10:30.891 filename=/dev/nvme0n1 00:10:30.891 [job1] 00:10:30.891 filename=/dev/nvme0n2 00:10:30.891 [job2] 00:10:30.891 filename=/dev/nvme0n3 00:10:30.891 [job3] 00:10:30.891 filename=/dev/nvme0n4 00:10:30.891 Could not set queue depth (nvme0n1) 00:10:30.891 Could not set queue depth (nvme0n2) 00:10:30.891 Could not set queue depth (nvme0n3) 00:10:30.891 Could not set queue depth (nvme0n4) 00:10:31.149 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.149 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.149 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.149 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:31.149 fio-3.35 00:10:31.149 Starting 4 threads 00:10:34.438 06:16:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:34.438 06:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:34.438 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=286720, buflen=4096 00:10:34.438 fio: pid=857484, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.438 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=53698560, buflen=4096 00:10:34.438 fio: pid=857481, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.438 06:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.438 06:16:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:34.438 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=46395392, buflen=4096 00:10:34.438 fio: pid=857479, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.438 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.438 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:34.696 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=352256, buflen=4096 00:10:34.696 fio: pid=857480, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:34.696 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.696 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:34.696 00:10:34.696 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857479: Fri Dec 13 06:16:26 2024 00:10:34.696 read: IOPS=3547, BW=13.9MiB/s (14.5MB/s)(44.2MiB/3193msec) 00:10:34.696 slat (usec): min=6, max=30704, avg=13.37, stdev=353.26 00:10:34.696 clat (usec): min=161, max=41344, avg=265.37, stdev=856.57 00:10:34.696 lat (usec): min=167, max=41353, avg=278.75, stdev=927.36 00:10:34.696 clat percentiles (usec): 00:10:34.696 | 1.00th=[ 188], 5.00th=[ 215], 10.00th=[ 233], 20.00th=[ 239], 00:10:34.696 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 251], 00:10:34.697 | 70.00th=[ 255], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:10:34.697 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 482], 99.95th=[ 791], 00:10:34.697 | 99.99th=[41157] 00:10:34.697 bw ( KiB/s): min= 9064, max=15520, per=49.19%, avg=14198.50, stdev=2573.07, samples=6 00:10:34.697 iops : min= 2266, max= 3880, avg=3549.50, stdev=643.27, samples=6 00:10:34.697 lat (usec) : 250=53.76%, 500=46.14%, 750=0.04%, 1000=0.01% 00:10:34.697 lat (msec) : 50=0.04% 00:10:34.697 cpu : usr=0.60%, sys=3.60%, ctx=11334, majf=0, minf=1 00:10:34.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 issued rwts: total=11328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.697 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857480: Fri Dec 13 06:16:26 2024 00:10:34.697 read: IOPS=25, BW=101KiB/s 
(103kB/s)(344KiB/3408msec) 00:10:34.697 slat (usec): min=9, max=11850, avg=250.38, stdev=1519.60 00:10:34.697 clat (usec): min=235, max=41954, avg=39120.72, stdev=8617.71 00:10:34.697 lat (usec): min=259, max=49135, avg=39236.21, stdev=8681.74 00:10:34.697 clat percentiles (usec): 00:10:34.697 | 1.00th=[ 235], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:10:34.697 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:10:34.697 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:34.697 | 99.99th=[42206] 00:10:34.697 bw ( KiB/s): min= 96, max= 106, per=0.35%, avg=101.67, stdev= 4.46, samples=6 00:10:34.697 iops : min= 24, max= 26, avg=25.33, stdev= 1.03, samples=6 00:10:34.697 lat (usec) : 250=2.30%, 500=2.30% 00:10:34.697 lat (msec) : 50=94.25% 00:10:34.697 cpu : usr=0.12%, sys=0.00%, ctx=90, majf=0, minf=2 00:10:34.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 issued rwts: total=87,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.697 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857481: Fri Dec 13 06:16:26 2024 00:10:34.697 read: IOPS=4418, BW=17.3MiB/s (18.1MB/s)(51.2MiB/2967msec) 00:10:34.697 slat (nsec): min=7188, max=45982, avg=8571.17, stdev=1485.86 00:10:34.697 clat (usec): min=162, max=40793, avg=214.23, stdev=355.03 00:10:34.697 lat (usec): min=173, max=40802, avg=222.81, stdev=355.04 00:10:34.697 clat percentiles (usec): 00:10:34.697 | 1.00th=[ 182], 5.00th=[ 190], 10.00th=[ 194], 20.00th=[ 198], 00:10:34.697 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:10:34.697 | 
70.00th=[ 219], 80.00th=[ 223], 90.00th=[ 231], 95.00th=[ 237], 00:10:34.697 | 99.00th=[ 258], 99.50th=[ 269], 99.90th=[ 297], 99.95th=[ 408], 00:10:34.697 | 99.99th=[ 1663] 00:10:34.697 bw ( KiB/s): min=16320, max=18240, per=61.42%, avg=17729.60, stdev=825.71, samples=5 00:10:34.697 iops : min= 4080, max= 4560, avg=4432.40, stdev=206.43, samples=5 00:10:34.697 lat (usec) : 250=98.25%, 500=1.72%, 750=0.01% 00:10:34.697 lat (msec) : 2=0.01%, 50=0.01% 00:10:34.697 cpu : usr=2.33%, sys=7.35%, ctx=13112, majf=0, minf=2 00:10:34.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 issued rwts: total=13111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.697 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=857484: Fri Dec 13 06:16:26 2024 00:10:34.697 read: IOPS=25, BW=102KiB/s (104kB/s)(280KiB/2758msec) 00:10:34.697 slat (nsec): min=6847, max=36298, avg=19039.89, stdev=5816.89 00:10:34.697 clat (usec): min=357, max=41357, avg=39074.07, stdev=8358.33 00:10:34.697 lat (usec): min=367, max=41364, avg=39093.06, stdev=8358.00 00:10:34.697 clat percentiles (usec): 00:10:34.697 | 1.00th=[ 359], 5.00th=[28967], 10.00th=[40633], 20.00th=[41157], 00:10:34.697 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:34.697 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:34.697 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:34.697 | 99.99th=[41157] 00:10:34.697 bw ( KiB/s): min= 96, max= 112, per=0.35%, avg=102.40, stdev= 6.69, samples=5 00:10:34.697 iops : min= 24, max= 28, avg=25.60, stdev= 1.67, samples=5 00:10:34.697 lat (usec) : 500=2.82%, 750=1.41% 00:10:34.697 lat (msec) : 
50=94.37% 00:10:34.697 cpu : usr=0.11%, sys=0.00%, ctx=71, majf=0, minf=2 00:10:34.697 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:34.697 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:34.697 issued rwts: total=71,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:34.697 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:34.697 00:10:34.697 Run status group 0 (all jobs): 00:10:34.697 READ: bw=28.2MiB/s (29.6MB/s), 101KiB/s-17.3MiB/s (103kB/s-18.1MB/s), io=96.1MiB (101MB), run=2758-3408msec 00:10:34.697 00:10:34.697 Disk stats (read/write): 00:10:34.697 nvme0n1: ios=11042/0, merge=0/0, ticks=3847/0, in_queue=3847, util=98.09% 00:10:34.697 nvme0n2: ios=85/0, merge=0/0, ticks=3326/0, in_queue=3326, util=96.18% 00:10:34.697 nvme0n3: ios=12734/0, merge=0/0, ticks=3255/0, in_queue=3255, util=99.09% 00:10:34.697 nvme0n4: ios=66/0, merge=0/0, ticks=2573/0, in_queue=2573, util=96.45% 00:10:34.955 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:34.955 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:35.214 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.214 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:35.472 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.472 06:16:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:35.472 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:35.472 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:35.731 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:35.731 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 857340 00:10:35.731 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:35.731 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:35.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:35.989 nvmf hotplug test: fio failed as expected 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.989 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.989 rmmod nvme_tcp 00:10:36.249 rmmod nvme_fabrics 00:10:36.249 rmmod nvme_keyring 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:36.249 06:16:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 854475 ']' 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 854475 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 854475 ']' 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 854475 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 854475 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 854475' 00:10:36.249 killing process with pid 854475 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 854475 00:10:36.249 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 854475 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:36.508 06:16:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.417 06:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:38.417 00:10:38.417 real 0m26.996s 00:10:38.417 user 1m48.050s 00:10:38.417 sys 0m8.672s 00:10:38.417 06:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:38.417 06:16:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:38.417 ************************************ 00:10:38.417 END TEST nvmf_fio_target 00:10:38.417 ************************************ 00:10:38.418 06:16:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.418 06:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:38.418 06:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:38.418 06:16:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:38.677 ************************************ 00:10:38.677 START 
TEST nvmf_bdevio 00:10:38.677 ************************************ 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:38.677 * Looking for test storage... 00:10:38.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:38.677 06:16:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:38.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.677 --rc genhtml_branch_coverage=1 00:10:38.677 --rc genhtml_function_coverage=1 00:10:38.677 --rc genhtml_legend=1 00:10:38.677 --rc geninfo_all_blocks=1 00:10:38.677 --rc geninfo_unexecuted_blocks=1 00:10:38.677 00:10:38.677 ' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:38.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.677 --rc genhtml_branch_coverage=1 00:10:38.677 --rc genhtml_function_coverage=1 00:10:38.677 --rc genhtml_legend=1 00:10:38.677 --rc geninfo_all_blocks=1 00:10:38.677 --rc geninfo_unexecuted_blocks=1 00:10:38.677 00:10:38.677 ' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:38.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.677 --rc genhtml_branch_coverage=1 00:10:38.677 --rc genhtml_function_coverage=1 00:10:38.677 --rc genhtml_legend=1 00:10:38.677 --rc geninfo_all_blocks=1 00:10:38.677 --rc geninfo_unexecuted_blocks=1 00:10:38.677 00:10:38.677 ' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:38.677 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:38.677 --rc genhtml_branch_coverage=1 00:10:38.677 --rc genhtml_function_coverage=1 00:10:38.677 --rc genhtml_legend=1 00:10:38.677 --rc geninfo_all_blocks=1 00:10:38.677 --rc geninfo_unexecuted_blocks=1 00:10:38.677 00:10:38.677 ' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:38.677 06:16:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.677 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:38.678 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:38.678 06:16:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:45.406 06:16:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:45.406 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:45.407 06:16:35 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:45.407 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:45.407 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:45.407 
06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:45.407 Found net devices under 0000:af:00.0: cvl_0_0 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:45.407 Found net devices under 0000:af:00.1: cvl_0_1 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:45.407 06:16:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:45.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:45.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.431 ms 00:10:45.407 00:10:45.407 --- 10.0.0.2 ping statistics --- 00:10:45.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.407 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:45.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:45.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:10:45.407 00:10:45.407 --- 10.0.0.1 ping statistics --- 00:10:45.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:45.407 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:45.407 06:16:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=861883 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 861883 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 861883 ']' 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:45.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.407 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.407 [2024-12-13 06:16:36.259445] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:10:45.407 [2024-12-13 06:16:36.259505] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.407 [2024-12-13 06:16:36.337380] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:45.408 [2024-12-13 06:16:36.360379] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:45.408 [2024-12-13 06:16:36.360419] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:45.408 [2024-12-13 06:16:36.360427] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:45.408 [2024-12-13 06:16:36.360432] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:45.408 [2024-12-13 06:16:36.360437] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:45.408 [2024-12-13 06:16:36.361957] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:45.408 [2024-12-13 06:16:36.362070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:45.408 [2024-12-13 06:16:36.362177] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:45.408 [2024-12-13 06:16:36.362179] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 [2024-12-13 06:16:36.494140] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.408 06:16:36 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 Malloc0 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:45.408 [2024-12-13 06:16:36.561037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:45.408 { 00:10:45.408 "params": { 00:10:45.408 "name": "Nvme$subsystem", 00:10:45.408 "trtype": "$TEST_TRANSPORT", 00:10:45.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:45.408 "adrfam": "ipv4", 00:10:45.408 "trsvcid": "$NVMF_PORT", 00:10:45.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:45.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:45.408 "hdgst": ${hdgst:-false}, 00:10:45.408 "ddgst": ${ddgst:-false} 00:10:45.408 }, 00:10:45.408 "method": "bdev_nvme_attach_controller" 00:10:45.408 } 00:10:45.408 EOF 00:10:45.408 )") 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:45.408 06:16:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:45.408 "params": { 00:10:45.408 "name": "Nvme1", 00:10:45.408 "trtype": "tcp", 00:10:45.408 "traddr": "10.0.0.2", 00:10:45.408 "adrfam": "ipv4", 00:10:45.408 "trsvcid": "4420", 00:10:45.408 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:45.408 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:45.408 "hdgst": false, 00:10:45.408 "ddgst": false 00:10:45.408 }, 00:10:45.408 "method": "bdev_nvme_attach_controller" 00:10:45.408 }' 00:10:45.408 [2024-12-13 06:16:36.610185] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:10:45.408 [2024-12-13 06:16:36.610224] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid861907 ] 00:10:45.408 [2024-12-13 06:16:36.683224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:45.408 [2024-12-13 06:16:36.708055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:45.408 [2024-12-13 06:16:36.708164] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.408 [2024-12-13 06:16:36.708165] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:45.408 I/O targets: 00:10:45.408 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:45.408 00:10:45.408 00:10:45.408 CUnit - A unit testing framework for C - Version 2.1-3 00:10:45.408 http://cunit.sourceforge.net/ 00:10:45.408 00:10:45.408 00:10:45.408 Suite: bdevio tests on: Nvme1n1 00:10:45.665 Test: blockdev write read block ...passed 00:10:45.665 Test: blockdev write zeroes read block ...passed 00:10:45.665 Test: blockdev write zeroes read no split ...passed 00:10:45.665 Test: blockdev write zeroes read split 
...passed 00:10:45.665 Test: blockdev write zeroes read split partial ...passed 00:10:45.665 Test: blockdev reset ...[2024-12-13 06:16:37.215080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:45.665 [2024-12-13 06:16:37.215141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8b6340 (9): Bad file descriptor 00:10:45.665 [2024-12-13 06:16:37.229148] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:45.665 passed 00:10:45.665 Test: blockdev write read 8 blocks ...passed 00:10:45.665 Test: blockdev write read size > 128k ...passed 00:10:45.665 Test: blockdev write read invalid size ...passed 00:10:45.665 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:45.665 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:45.665 Test: blockdev write read max offset ...passed 00:10:45.923 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:45.923 Test: blockdev writev readv 8 blocks ...passed 00:10:45.923 Test: blockdev writev readv 30 x 1block ...passed 00:10:45.923 Test: blockdev writev readv block ...passed 00:10:45.923 Test: blockdev writev readv size > 128k ...passed 00:10:45.923 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:45.923 Test: blockdev comparev and writev ...[2024-12-13 06:16:37.443100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 
06:16:37.443154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.443946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:45.923 [2024-12-13 06:16:37.443953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:45.923 passed 00:10:45.923 Test: blockdev nvme passthru rw ...passed 00:10:45.923 Test: blockdev nvme passthru vendor specific ...[2024-12-13 06:16:37.526828] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.923 [2024-12-13 06:16:37.526843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.526945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.923 [2024-12-13 06:16:37.526955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.527052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.923 [2024-12-13 06:16:37.527063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:45.923 [2024-12-13 06:16:37.527162] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:45.923 [2024-12-13 06:16:37.527171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:45.923 passed 00:10:45.923 Test: blockdev nvme admin passthru ...passed 00:10:46.181 Test: blockdev copy ...passed 00:10:46.181 00:10:46.181 Run Summary: Type Total Ran Passed Failed Inactive 00:10:46.181 suites 1 1 n/a 0 0 00:10:46.181 tests 23 23 23 0 0 00:10:46.181 asserts 152 152 152 0 n/a 00:10:46.181 00:10:46.181 Elapsed time = 1.064 seconds 
00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:46.181 rmmod nvme_tcp 00:10:46.181 rmmod nvme_fabrics 00:10:46.181 rmmod nvme_keyring 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 861883 ']' 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 861883 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 861883 ']' 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 861883 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:46.181 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 861883 00:10:46.440 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:10:46.440 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:10:46.440 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 861883' 00:10:46.440 killing process with pid 861883 00:10:46.440 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 861883 00:10:46.440 06:16:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 861883 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:46.440 06:16:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:48.976 00:10:48.976 real 0m10.023s 00:10:48.976 user 0m10.759s 00:10:48.976 sys 0m4.934s 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:48.976 ************************************ 00:10:48.976 END TEST nvmf_bdevio 00:10:48.976 ************************************ 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:10:48.976 00:10:48.976 real 4m33.784s 00:10:48.976 user 10m16.936s 00:10:48.976 sys 1m36.555s 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:48.976 ************************************ 00:10:48.976 END TEST nvmf_target_core 00:10:48.976 ************************************ 00:10:48.976 06:16:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:48.976 06:16:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.976 06:16:40 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.976 06:16:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:10:48.976 ************************************ 00:10:48.976 START TEST nvmf_target_extra 00:10:48.976 ************************************ 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:48.976 * Looking for test storage... 00:10:48.976 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.976 --rc genhtml_branch_coverage=1 00:10:48.976 --rc genhtml_function_coverage=1 00:10:48.976 --rc genhtml_legend=1 00:10:48.976 --rc geninfo_all_blocks=1 
00:10:48.976 --rc geninfo_unexecuted_blocks=1 00:10:48.976 00:10:48.976 ' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.976 --rc genhtml_branch_coverage=1 00:10:48.976 --rc genhtml_function_coverage=1 00:10:48.976 --rc genhtml_legend=1 00:10:48.976 --rc geninfo_all_blocks=1 00:10:48.976 --rc geninfo_unexecuted_blocks=1 00:10:48.976 00:10:48.976 ' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.976 --rc genhtml_branch_coverage=1 00:10:48.976 --rc genhtml_function_coverage=1 00:10:48.976 --rc genhtml_legend=1 00:10:48.976 --rc geninfo_all_blocks=1 00:10:48.976 --rc geninfo_unexecuted_blocks=1 00:10:48.976 00:10:48.976 ' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.976 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.976 --rc genhtml_branch_coverage=1 00:10:48.976 --rc genhtml_function_coverage=1 00:10:48.976 --rc genhtml_legend=1 00:10:48.976 --rc geninfo_all_blocks=1 00:10:48.976 --rc geninfo_unexecuted_blocks=1 00:10:48.976 00:10:48.976 ' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:48.976 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:48.977 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:48.977 ************************************ 00:10:48.977 START TEST nvmf_example 00:10:48.977 ************************************ 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:48.977 * Looking for test storage... 00:10:48.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:48.977 
06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.977 --rc genhtml_branch_coverage=1 00:10:48.977 --rc genhtml_function_coverage=1 00:10:48.977 --rc genhtml_legend=1 00:10:48.977 --rc geninfo_all_blocks=1 00:10:48.977 --rc geninfo_unexecuted_blocks=1 00:10:48.977 00:10:48.977 ' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.977 --rc genhtml_branch_coverage=1 00:10:48.977 --rc genhtml_function_coverage=1 00:10:48.977 --rc genhtml_legend=1 00:10:48.977 --rc geninfo_all_blocks=1 00:10:48.977 --rc geninfo_unexecuted_blocks=1 00:10:48.977 00:10:48.977 ' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.977 --rc genhtml_branch_coverage=1 00:10:48.977 --rc genhtml_function_coverage=1 00:10:48.977 --rc genhtml_legend=1 00:10:48.977 --rc geninfo_all_blocks=1 00:10:48.977 --rc geninfo_unexecuted_blocks=1 00:10:48.977 00:10:48.977 ' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:48.977 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:48.977 --rc 
genhtml_branch_coverage=1 00:10:48.977 --rc genhtml_function_coverage=1 00:10:48.977 --rc genhtml_legend=1 00:10:48.977 --rc geninfo_all_blocks=1 00:10:48.977 --rc geninfo_unexecuted_blocks=1 00:10:48.977 00:10:48.977 ' 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:48.977 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:49.237 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:49.237 06:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:49.237 
06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:10:49.237 06:16:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.806 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:55.806 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:10:55.806 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:55.806 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:55.807 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:55.807 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:55.807 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:55.807 Found net devices under 0000:af:00.0: cvl_0_0 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:55.807 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:55.807 Found net devices under 0000:af:00.1: cvl_0_1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:55.807 
06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:55.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:55.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:10:55.807 00:10:55.807 --- 10.0.0.2 ping statistics --- 00:10:55.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.807 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:55.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:55.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:10:55.807 00:10:55.807 --- 10.0.0.1 ping statistics --- 00:10:55.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:55.807 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:55.807 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:55.808 06:16:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=865808 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 865808 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 865808 ']' 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:10:55.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:55.808 06:16:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:56.065 06:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:56.065 06:16:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:08.248 Initializing NVMe Controllers 00:11:08.248 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:08.248 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:08.248 Initialization complete. Launching workers. 00:11:08.248 ======================================================== 00:11:08.248 Latency(us) 00:11:08.248 Device Information : IOPS MiB/s Average min max 00:11:08.248 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18764.81 73.30 3409.99 544.72 16185.07 00:11:08.248 ======================================================== 00:11:08.248 Total : 18764.81 73.30 3409.99 544.72 16185.07 00:11:08.248 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:08.248 rmmod nvme_tcp 00:11:08.248 rmmod nvme_fabrics 00:11:08.248 rmmod nvme_keyring 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 865808 ']' 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 865808 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 865808 ']' 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 865808 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 865808 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 865808' 00:11:08.248 killing process with pid 865808 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 865808 00:11:08.248 06:16:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 865808 00:11:08.248 nvmf threads initialize successfully 00:11:08.248 bdev subsystem init successfully 00:11:08.248 created a nvmf target service 00:11:08.248 create targets's poll groups done 00:11:08.248 all subsystems of target started 00:11:08.248 nvmf target is running 00:11:08.248 all subsystems of target stopped 00:11:08.248 destroy targets's poll groups done 00:11:08.248 destroyed the nvmf target service 00:11:08.248 bdev subsystem finish 
successfully 00:11:08.248 nvmf threads destroy successfully 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.248 06:16:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.816 00:11:08.816 real 0m19.844s 00:11:08.816 user 0m46.254s 00:11:08.816 sys 0m5.937s 00:11:08.816 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:08.816 ************************************ 00:11:08.816 END TEST nvmf_example 00:11:08.816 ************************************ 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:08.816 06:17:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:08.816 ************************************ 00:11:08.816 START TEST nvmf_filesystem 00:11:08.816 ************************************ 00:11:08.817 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:08.817 * Looking for test storage... 
00:11:08.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:08.817 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:08.817 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:08.817 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:09.077 
06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.077 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:09.077 --rc genhtml_branch_coverage=1 00:11:09.077 --rc genhtml_function_coverage=1 00:11:09.077 --rc genhtml_legend=1 00:11:09.077 --rc geninfo_all_blocks=1 00:11:09.077 --rc geninfo_unexecuted_blocks=1 00:11:09.077 00:11:09.077 ' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.077 --rc genhtml_branch_coverage=1 00:11:09.077 --rc genhtml_function_coverage=1 00:11:09.077 --rc genhtml_legend=1 00:11:09.077 --rc geninfo_all_blocks=1 00:11:09.077 --rc geninfo_unexecuted_blocks=1 00:11:09.077 00:11:09.077 ' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.077 --rc genhtml_branch_coverage=1 00:11:09.077 --rc genhtml_function_coverage=1 00:11:09.077 --rc genhtml_legend=1 00:11:09.077 --rc geninfo_all_blocks=1 00:11:09.077 --rc geninfo_unexecuted_blocks=1 00:11:09.077 00:11:09.077 ' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.077 --rc genhtml_branch_coverage=1 00:11:09.077 --rc genhtml_function_coverage=1 00:11:09.077 --rc genhtml_legend=1 00:11:09.077 --rc geninfo_all_blocks=1 00:11:09.077 --rc geninfo_unexecuted_blocks=1 00:11:09.077 00:11:09.077 ' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:09.077 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:09.077 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:09.078 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:09.078 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:09.078 
06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:09.078 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:09.078 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:09.078 #define SPDK_CONFIG_H 00:11:09.078 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:09.078 #define SPDK_CONFIG_APPS 1 00:11:09.078 #define SPDK_CONFIG_ARCH native 00:11:09.078 #undef SPDK_CONFIG_ASAN 00:11:09.078 #undef SPDK_CONFIG_AVAHI 00:11:09.079 #undef SPDK_CONFIG_CET 00:11:09.079 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:09.079 #define SPDK_CONFIG_COVERAGE 1 00:11:09.079 #define SPDK_CONFIG_CROSS_PREFIX 00:11:09.079 #undef SPDK_CONFIG_CRYPTO 00:11:09.079 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:09.079 #undef SPDK_CONFIG_CUSTOMOCF 00:11:09.079 #undef SPDK_CONFIG_DAOS 00:11:09.079 #define SPDK_CONFIG_DAOS_DIR 00:11:09.079 #define SPDK_CONFIG_DEBUG 1 00:11:09.079 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:09.079 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:09.079 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:09.079 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:09.079 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:09.079 #undef SPDK_CONFIG_DPDK_UADK 00:11:09.079 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:09.079 #define SPDK_CONFIG_EXAMPLES 1 00:11:09.079 #undef SPDK_CONFIG_FC 00:11:09.079 #define SPDK_CONFIG_FC_PATH 00:11:09.079 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:09.079 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:09.079 #define SPDK_CONFIG_FSDEV 1 00:11:09.079 #undef SPDK_CONFIG_FUSE 00:11:09.079 #undef SPDK_CONFIG_FUZZER 00:11:09.079 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:09.079 #undef SPDK_CONFIG_GOLANG 00:11:09.079 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:09.079 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:09.079 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:09.079 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:09.079 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:09.079 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:09.079 #undef SPDK_CONFIG_HAVE_LZ4 00:11:09.079 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:09.079 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:09.079 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:09.079 #define SPDK_CONFIG_IDXD 1 00:11:09.079 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:09.079 #undef SPDK_CONFIG_IPSEC_MB 00:11:09.079 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:09.079 #define SPDK_CONFIG_ISAL 1 00:11:09.079 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:09.079 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:09.079 #define SPDK_CONFIG_LIBDIR 00:11:09.079 #undef SPDK_CONFIG_LTO 00:11:09.079 #define SPDK_CONFIG_MAX_LCORES 128 00:11:09.079 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:09.079 #define SPDK_CONFIG_NVME_CUSE 1 00:11:09.079 #undef SPDK_CONFIG_OCF 00:11:09.079 #define SPDK_CONFIG_OCF_PATH 00:11:09.079 #define SPDK_CONFIG_OPENSSL_PATH 00:11:09.079 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:09.079 #define SPDK_CONFIG_PGO_DIR 00:11:09.079 #undef SPDK_CONFIG_PGO_USE 00:11:09.079 #define SPDK_CONFIG_PREFIX /usr/local 00:11:09.079 #undef SPDK_CONFIG_RAID5F 00:11:09.079 #undef SPDK_CONFIG_RBD 00:11:09.079 #define SPDK_CONFIG_RDMA 1 00:11:09.079 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:09.079 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:09.079 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:09.079 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:09.079 #define SPDK_CONFIG_SHARED 1 00:11:09.079 #undef SPDK_CONFIG_SMA 00:11:09.079 #define SPDK_CONFIG_TESTS 1 00:11:09.079 #undef SPDK_CONFIG_TSAN 00:11:09.079 #define SPDK_CONFIG_UBLK 1 00:11:09.079 #define SPDK_CONFIG_UBSAN 1 00:11:09.079 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:09.079 #undef SPDK_CONFIG_URING 00:11:09.079 #define SPDK_CONFIG_URING_PATH 00:11:09.079 #undef SPDK_CONFIG_URING_ZNS 00:11:09.079 #undef SPDK_CONFIG_USDT 00:11:09.079 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:09.079 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:09.079 #define SPDK_CONFIG_VFIO_USER 1 00:11:09.079 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:09.079 #define SPDK_CONFIG_VHOST 1 00:11:09.079 #define SPDK_CONFIG_VIRTIO 1 00:11:09.079 #undef SPDK_CONFIG_VTUNE 00:11:09.079 #define SPDK_CONFIG_VTUNE_DIR 00:11:09.079 #define SPDK_CONFIG_WERROR 1 00:11:09.079 #define SPDK_CONFIG_WPDK_DIR 00:11:09.079 #undef SPDK_CONFIG_XNVME 00:11:09.079 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.079 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:09.079 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:09.080 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:09.080 
06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:09.080 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:09.080 
06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v23.11 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:09.080 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:09.080 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:09.081 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 868179 ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 868179 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.d8ZmX4 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.d8ZmX4/tests/target /tmp/spdk.d8ZmX4 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88107126784 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=7445278720 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766171648 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775858688 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:09.082 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=344064 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:09.082 * Looking for test storage... 
00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88107126784 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9659871232 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.082 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:09.082 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:09.082 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:09.083 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:09.083 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:09.083 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:09.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.342 --rc genhtml_branch_coverage=1 00:11:09.342 --rc genhtml_function_coverage=1 00:11:09.342 --rc genhtml_legend=1 00:11:09.342 --rc geninfo_all_blocks=1 00:11:09.342 --rc geninfo_unexecuted_blocks=1 00:11:09.342 00:11:09.342 ' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:09.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.342 --rc genhtml_branch_coverage=1 00:11:09.342 --rc genhtml_function_coverage=1 00:11:09.342 --rc genhtml_legend=1 00:11:09.342 --rc geninfo_all_blocks=1 00:11:09.342 --rc geninfo_unexecuted_blocks=1 00:11:09.342 00:11:09.342 ' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:09.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.342 --rc genhtml_branch_coverage=1 00:11:09.342 --rc genhtml_function_coverage=1 00:11:09.342 --rc genhtml_legend=1 00:11:09.342 --rc geninfo_all_blocks=1 00:11:09.342 --rc geninfo_unexecuted_blocks=1 00:11:09.342 00:11:09.342 ' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:09.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:09.342 --rc genhtml_branch_coverage=1 00:11:09.342 --rc genhtml_function_coverage=1 00:11:09.342 --rc genhtml_legend=1 00:11:09.342 --rc geninfo_all_blocks=1 00:11:09.342 --rc geninfo_unexecuted_blocks=1 00:11:09.342 00:11:09.342 ' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:09.342 06:17:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:09.342 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:09.343 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:09.343 06:17:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:15.915 06:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:15.915 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:15.915 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:15.915 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.916 06:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:15.916 Found net devices under 0000:af:00.0: cvl_0_0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:15.916 Found net devices under 0000:af:00.1: cvl_0_1 00:11:15.916 06:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:15.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:15.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.400 ms 00:11:15.916 00:11:15.916 --- 10.0.0.2 ping statistics --- 00:11:15.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.916 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:15.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:15.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.223 ms 00:11:15.916 00:11:15.916 --- 10.0.0.1 ping statistics --- 00:11:15.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:15.916 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:15.916 06:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 ************************************ 00:11:15.916 START TEST nvmf_filesystem_no_in_capsule 00:11:15.916 ************************************ 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=871204 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 871204 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 871204 ']' 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.916 06:17:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.916 [2024-12-13 06:17:06.871493] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:15.916 [2024-12-13 06:17:06.871533] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.916 [2024-12-13 06:17:06.949743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:15.917 [2024-12-13 06:17:06.972335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.917 [2024-12-13 06:17:06.972375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
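At this point the log shows `nvmfappstart` launching `nvmf_tgt` inside the namespace and then blocking in `waitforlisten` on `/var/tmp/spdk.sock` with `max_retries=100` (autotest_common.sh@839-840). The loop body itself is not traced here, so the following is only a plausible reconstruction of the polling pattern, assuming a retry loop that also bails out if the process dies:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the waitforlisten pattern: poll until the target's
# RPC socket path exists, failing fast if the app exits first.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app exited before listening
        [[ -e $rpc_addr ]] && return 0           # RPC socket showed up
        sleep 0.1
    done
    return 1                                     # timed out
}
```

Usage mirrors the log: start the app in the background (`nvmf_tgt ... & nvmfpid=$!`), then `waitforlisten "$nvmfpid"` before issuing any `rpc_cmd` calls such as the `nvmf_create_transport -t tcp -o -u 8192 -c 0` seen below.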
00:11:15.917 [2024-12-13 06:17:06.972381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.917 [2024-12-13 06:17:06.972388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.917 [2024-12-13 06:17:06.972393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:15.917 [2024-12-13 06:17:06.973745] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:15.917 [2024-12-13 06:17:06.973862] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:15.917 [2024-12-13 06:17:06.973967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.917 [2024-12-13 06:17:06.973969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 [2024-12-13 06:17:07.113910] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 [2024-12-13 06:17:07.279622] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:15.917 06:17:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:15.917 { 00:11:15.917 "name": "Malloc1", 00:11:15.917 "aliases": [ 00:11:15.917 "3ba41c50-e5ea-493f-82be-b27e860843d3" 00:11:15.917 ], 00:11:15.917 "product_name": "Malloc disk", 00:11:15.917 "block_size": 512, 00:11:15.917 "num_blocks": 1048576, 00:11:15.917 "uuid": "3ba41c50-e5ea-493f-82be-b27e860843d3", 00:11:15.917 "assigned_rate_limits": { 00:11:15.917 "rw_ios_per_sec": 0, 00:11:15.917 "rw_mbytes_per_sec": 0, 00:11:15.917 "r_mbytes_per_sec": 0, 00:11:15.917 "w_mbytes_per_sec": 0 00:11:15.917 }, 00:11:15.917 "claimed": true, 00:11:15.917 "claim_type": "exclusive_write", 00:11:15.917 "zoned": false, 00:11:15.917 "supported_io_types": { 00:11:15.917 "read": true, 00:11:15.917 "write": true, 00:11:15.917 "unmap": true, 00:11:15.917 "flush": true, 00:11:15.917 "reset": true, 00:11:15.917 "nvme_admin": false, 00:11:15.917 "nvme_io": false, 00:11:15.917 "nvme_io_md": false, 00:11:15.917 "write_zeroes": true, 00:11:15.917 "zcopy": true, 00:11:15.917 "get_zone_info": false, 00:11:15.917 "zone_management": false, 00:11:15.917 "zone_append": false, 00:11:15.917 "compare": false, 00:11:15.917 "compare_and_write": 
false, 00:11:15.917 "abort": true, 00:11:15.917 "seek_hole": false, 00:11:15.917 "seek_data": false, 00:11:15.917 "copy": true, 00:11:15.917 "nvme_iov_md": false 00:11:15.917 }, 00:11:15.917 "memory_domains": [ 00:11:15.917 { 00:11:15.917 "dma_device_id": "system", 00:11:15.917 "dma_device_type": 1 00:11:15.917 }, 00:11:15.917 { 00:11:15.917 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.917 "dma_device_type": 2 00:11:15.917 } 00:11:15.917 ], 00:11:15.917 "driver_specific": {} 00:11:15.917 } 00:11:15.917 ]' 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:15.917 06:17:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:17.286 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:17.286 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:17.286 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:17.286 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:17.286 06:17:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:19.180 06:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:19.180 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:19.437 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:19.437 06:17:10 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:20.368 06:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:20.368 ************************************ 00:11:20.368 START TEST filesystem_ext4 00:11:20.368 ************************************ 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:20.368 06:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:20.368 06:17:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:20.368 mke2fs 1.47.0 (5-Feb-2023) 00:11:20.625 Discarding device blocks: 0/522240 done 00:11:20.625 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:20.625 Filesystem UUID: 1d1799da-9ca3-4263-8d9b-c910009e7473 00:11:20.625 Superblock backups stored on blocks: 00:11:20.625 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:20.625 00:11:20.625 Allocating group tables: 0/64 done 00:11:20.625 Writing inode tables: 0/64 done 00:11:20.625 Creating journal (8192 blocks): done 00:11:20.625 Writing superblocks and filesystem accounting information: 0/64 done 00:11:20.625 00:11:20.625 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:20.625 06:17:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:25.878 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:25.878 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:25.878 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:25.878 06:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 871204 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:26.136 00:11:26.136 real 0m5.596s 00:11:26.136 user 0m0.026s 00:11:26.136 sys 0m0.072s 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:26.136 ************************************ 00:11:26.136 END TEST filesystem_ext4 00:11:26.136 ************************************ 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:26.136 
06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:26.136 ************************************ 00:11:26.136 START TEST filesystem_btrfs 00:11:26.136 ************************************ 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:26.136 06:17:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:26.136 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:26.393 btrfs-progs v6.8.1 00:11:26.393 See https://btrfs.readthedocs.io for more information. 00:11:26.393 00:11:26.393 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:26.393 NOTE: several default settings have changed in version 5.15, please make sure 00:11:26.393 this does not affect your deployments: 00:11:26.393 - DUP for metadata (-m dup) 00:11:26.393 - enabled no-holes (-O no-holes) 00:11:26.393 - enabled free-space-tree (-R free-space-tree) 00:11:26.393 00:11:26.393 Label: (null) 00:11:26.393 UUID: cca031df-0228-4b8c-a978-b7357256915f 00:11:26.393 Node size: 16384 00:11:26.393 Sector size: 4096 (CPU page size: 4096) 00:11:26.393 Filesystem size: 510.00MiB 00:11:26.393 Block group profiles: 00:11:26.393 Data: single 8.00MiB 00:11:26.393 Metadata: DUP 32.00MiB 00:11:26.393 System: DUP 8.00MiB 00:11:26.393 SSD detected: yes 00:11:26.393 Zoned device: no 00:11:26.393 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:26.393 Checksum: crc32c 00:11:26.393 Number of devices: 1 00:11:26.393 Devices: 00:11:26.393 ID SIZE PATH 00:11:26.393 1 510.00MiB /dev/nvme0n1p1 00:11:26.393 00:11:26.393 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:26.393 06:17:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:26.957 06:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:26.957 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:26.957 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:26.957 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:26.957 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:26.957 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 871204 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:27.214 00:11:27.214 real 0m0.999s 00:11:27.214 user 0m0.024s 00:11:27.214 sys 0m0.121s 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:27.214 
06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 ************************************ 00:11:27.214 END TEST filesystem_btrfs 00:11:27.214 ************************************ 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:27.214 ************************************ 00:11:27.214 START TEST filesystem_xfs 00:11:27.214 ************************************ 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:27.214 06:17:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:27.214 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:27.214 = sectsz=512 attr=2, projid32bit=1 00:11:27.214 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:27.214 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:27.214 data = bsize=4096 blocks=130560, imaxpct=25 00:11:27.214 = sunit=0 swidth=0 blks 00:11:27.214 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:27.214 log =internal log bsize=4096 blocks=16384, version=2 00:11:27.214 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:27.214 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:28.145 Discarding blocks...Done. 
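The three `run_test` blocks (ext4, btrfs, xfs) all route through the same `make_filesystem` helper; the trace shows it selecting `force=-F` for ext4 (autotest_common.sh@936) but `force=-f` for btrfs and xfs (autotest_common.sh@938), because `mkfs.ext4` spells its force flag differently. A dry-run sketch of just that flag selection (the `echo` stands in for actually formatting the device):

```shell
#!/usr/bin/env bash
# Sketch of make_filesystem's force-flag selection as traced above.
# Dry run: prints the mkfs command line rather than formatting anything.
make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [[ $fstype == ext4 ]]; then
        force=-F    # mkfs.ext4 uses uppercase -F to force
    else
        force=-f    # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}
```

Each filesystem then gets the identical smoke test seen in the log (filesystem.sh@23-30): mount the partition, `touch` a file, `sync`, `rm` it, `sync` again, and `umount`, with `kill -0` confirming the target survived the I/O.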
00:11:28.145 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:28.145 06:17:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:30.039 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 871204 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:30.296 06:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:30.296 00:11:30.296 real 0m3.035s 00:11:30.296 user 0m0.018s 00:11:30.296 sys 0m0.080s 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:30.296 ************************************ 00:11:30.296 END TEST filesystem_xfs 00:11:30.296 ************************************ 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:30.296 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:30.553 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.553 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:30.553 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:30.553 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:30.553 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.553 06:17:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 871204 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 871204 ']' 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 871204 00:11:30.553 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 871204 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 871204' 00:11:30.554 killing process with pid 871204 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 871204 00:11:30.554 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 871204 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:30.812 00:11:30.812 real 0m15.580s 00:11:30.812 user 1m1.287s 00:11:30.812 sys 0m1.376s 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:30.812 ************************************ 00:11:30.812 END TEST nvmf_filesystem_no_in_capsule 00:11:30.812 ************************************ 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:30.812 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:30.812 06:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:31.070 ************************************ 00:11:31.070 START TEST nvmf_filesystem_in_capsule 00:11:31.070 ************************************ 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=874061 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 874061 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 874061 ']' 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:31.070 06:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:31.070 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:31.070 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.070 [2024-12-13 06:17:22.521856] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:11:31.070 [2024-12-13 06:17:22.521898] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:31.070 [2024-12-13 06:17:22.603679] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:31.070 [2024-12-13 06:17:22.625704] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:31.070 [2024-12-13 06:17:22.625749] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:31.070 [2024-12-13 06:17:22.625756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:31.070 [2024-12-13 06:17:22.625761] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:31.070 [2024-12-13 06:17:22.625766] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:31.070 [2024-12-13 06:17:22.627194] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:31.070 [2024-12-13 06:17:22.627304] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:31.070 [2024-12-13 06:17:22.627411] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.070 [2024-12-13 06:17:22.627412] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 [2024-12-13 06:17:22.766825] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 06:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 [2024-12-13 06:17:22.918606] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.328 06:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.328 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:31.328 { 00:11:31.328 "name": "Malloc1", 00:11:31.328 "aliases": [ 00:11:31.328 "bea9c49a-1642-47e4-b9d9-8d9369002654" 00:11:31.328 ], 00:11:31.328 "product_name": "Malloc disk", 00:11:31.328 "block_size": 512, 00:11:31.328 "num_blocks": 1048576, 00:11:31.328 "uuid": "bea9c49a-1642-47e4-b9d9-8d9369002654", 00:11:31.328 "assigned_rate_limits": { 00:11:31.328 "rw_ios_per_sec": 0, 00:11:31.328 "rw_mbytes_per_sec": 0, 00:11:31.328 "r_mbytes_per_sec": 0, 00:11:31.328 "w_mbytes_per_sec": 0 00:11:31.328 }, 00:11:31.328 "claimed": true, 00:11:31.328 "claim_type": "exclusive_write", 00:11:31.328 "zoned": false, 00:11:31.328 "supported_io_types": { 00:11:31.328 "read": true, 00:11:31.328 "write": true, 00:11:31.328 "unmap": true, 00:11:31.328 "flush": true, 00:11:31.328 "reset": true, 00:11:31.328 "nvme_admin": false, 00:11:31.328 "nvme_io": false, 00:11:31.328 "nvme_io_md": false, 00:11:31.328 "write_zeroes": true, 00:11:31.328 "zcopy": true, 00:11:31.328 "get_zone_info": false, 00:11:31.328 "zone_management": false, 00:11:31.328 "zone_append": false, 00:11:31.328 "compare": false, 00:11:31.328 "compare_and_write": false, 00:11:31.328 "abort": true, 00:11:31.329 "seek_hole": false, 00:11:31.329 "seek_data": false, 00:11:31.329 "copy": true, 00:11:31.329 "nvme_iov_md": false 00:11:31.329 }, 00:11:31.329 "memory_domains": [ 00:11:31.329 { 00:11:31.329 "dma_device_id": "system", 00:11:31.329 "dma_device_type": 1 00:11:31.329 }, 00:11:31.329 { 00:11:31.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.329 "dma_device_type": 2 00:11:31.329 } 00:11:31.329 ], 00:11:31.329 
"driver_specific": {} 00:11:31.329 } 00:11:31.329 ]' 00:11:31.329 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:31.587 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:31.587 06:17:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:31.587 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:31.587 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:31.587 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:31.587 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:31.587 06:17:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.517 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.517 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.517 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.517 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:32.517 06:17:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:35.038 06:17:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:35.038 06:17:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:36.018 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:36.018 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:36.018 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.018 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.019 ************************************ 00:11:36.019 START TEST filesystem_in_capsule_ext4 00:11:36.019 ************************************ 00:11:36.019 06:17:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:36.019 06:17:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:36.019 mke2fs 1.47.0 (5-Feb-2023) 00:11:36.019 Discarding device blocks: 
0/522240 done 00:11:36.320 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:36.320 Filesystem UUID: c6470de8-22ca-483a-8a97-641a90fc9edb 00:11:36.320 Superblock backups stored on blocks: 00:11:36.320 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:36.320 00:11:36.320 Allocating group tables: 0/64 done 00:11:36.320 Writing inode tables: 0/64 done 00:11:38.857 Creating journal (8192 blocks): done 00:11:39.885 Writing superblocks and filesystem accounting information: 0/64 6/64 done 00:11:39.885 00:11:39.885 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:39.885 06:17:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 874061 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.430 06:17:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.430 00:11:46.430 real 0m9.459s 00:11:46.430 user 0m0.027s 00:11:46.430 sys 0m0.076s 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:46.430 ************************************ 00:11:46.430 END TEST filesystem_in_capsule_ext4 00:11:46.430 ************************************ 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.430 ************************************ 00:11:46.430 START 
TEST filesystem_in_capsule_btrfs 00:11:46.430 ************************************ 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:46.430 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:46.431 btrfs-progs v6.8.1 00:11:46.431 See https://btrfs.readthedocs.io for more information. 00:11:46.431 00:11:46.431 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:46.431 NOTE: several default settings have changed in version 5.15, please make sure 00:11:46.431 this does not affect your deployments: 00:11:46.431 - DUP for metadata (-m dup) 00:11:46.431 - enabled no-holes (-O no-holes) 00:11:46.431 - enabled free-space-tree (-R free-space-tree) 00:11:46.431 00:11:46.431 Label: (null) 00:11:46.431 UUID: 52799953-baae-4a40-bb6c-6046df2ea931 00:11:46.431 Node size: 16384 00:11:46.431 Sector size: 4096 (CPU page size: 4096) 00:11:46.431 Filesystem size: 510.00MiB 00:11:46.431 Block group profiles: 00:11:46.431 Data: single 8.00MiB 00:11:46.431 Metadata: DUP 32.00MiB 00:11:46.431 System: DUP 8.00MiB 00:11:46.431 SSD detected: yes 00:11:46.431 Zoned device: no 00:11:46.431 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:46.431 Checksum: crc32c 00:11:46.431 Number of devices: 1 00:11:46.431 Devices: 00:11:46.431 ID SIZE PATH 00:11:46.431 1 510.00MiB /dev/nvme0n1p1 00:11:46.431 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:46.431 06:17:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 874061 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.688 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.945 00:11:46.945 real 0m1.278s 00:11:46.945 user 0m0.025s 00:11:46.945 sys 0m0.116s 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.945 ************************************ 00:11:46.945 END TEST filesystem_in_capsule_btrfs 00:11:46.945 ************************************ 00:11:46.945 06:17:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:46.945 ************************************ 00:11:46.945 START TEST filesystem_in_capsule_xfs 00:11:46.945 ************************************ 00:11:46.945 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:46.946 
06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:46.946 06:17:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:46.946 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:46.946 = sectsz=512 attr=2, projid32bit=1 00:11:46.946 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:46.946 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:46.946 data = bsize=4096 blocks=130560, imaxpct=25 00:11:46.946 = sunit=0 swidth=0 blks 00:11:46.946 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:46.946 log =internal log bsize=4096 blocks=16384, version=2 00:11:46.946 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:46.946 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:47.876 Discarding blocks...Done. 
00:11:47.876 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:47.876 06:17:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:50.400 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 874061 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:50.658 00:11:50.658 real 0m3.684s 00:11:50.658 user 0m0.023s 00:11:50.658 sys 0m0.077s 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:50.658 ************************************ 00:11:50.658 END TEST filesystem_in_capsule_xfs 00:11:50.658 ************************************ 00:11:50.658 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:50.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.917 06:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 874061 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 874061 ']' 00:11:50.917 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 874061 00:11:50.918 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:50.918 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:50.918 06:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 874061 00:11:51.176 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:51.176 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:51.176 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 874061' 00:11:51.176 killing process with pid 874061 00:11:51.176 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 874061 00:11:51.176 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 874061 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:51.435 00:11:51.435 real 0m20.418s 00:11:51.435 user 1m20.524s 00:11:51.435 sys 0m1.461s 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:51.435 ************************************ 00:11:51.435 END TEST nvmf_filesystem_in_capsule 00:11:51.435 ************************************ 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:51.435 rmmod nvme_tcp 00:11:51.435 rmmod nvme_fabrics 00:11:51.435 rmmod nvme_keyring 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:51.435 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:51.436 06:17:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:53.972 00:11:53.972 real 0m44.704s 00:11:53.972 user 2m23.859s 00:11:53.972 sys 0m7.511s 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:53.972 ************************************ 00:11:53.972 END TEST nvmf_filesystem 00:11:53.972 ************************************ 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:53.972 ************************************ 00:11:53.972 START TEST nvmf_target_discovery 00:11:53.972 ************************************ 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:53.972 * Looking for test storage... 
00:11:53.972 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:11:53.972 
06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:53.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.972 --rc genhtml_branch_coverage=1 00:11:53.972 --rc genhtml_function_coverage=1 00:11:53.972 --rc genhtml_legend=1 00:11:53.972 --rc geninfo_all_blocks=1 00:11:53.972 --rc geninfo_unexecuted_blocks=1 00:11:53.972 00:11:53.972 ' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:53.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.972 --rc genhtml_branch_coverage=1 00:11:53.972 --rc genhtml_function_coverage=1 00:11:53.972 --rc genhtml_legend=1 00:11:53.972 --rc geninfo_all_blocks=1 00:11:53.972 --rc geninfo_unexecuted_blocks=1 00:11:53.972 00:11:53.972 ' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:53.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.972 --rc genhtml_branch_coverage=1 00:11:53.972 --rc genhtml_function_coverage=1 00:11:53.972 --rc genhtml_legend=1 00:11:53.972 --rc geninfo_all_blocks=1 00:11:53.972 --rc geninfo_unexecuted_blocks=1 00:11:53.972 00:11:53.972 ' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:53.972 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:53.972 --rc genhtml_branch_coverage=1 00:11:53.972 --rc genhtml_function_coverage=1 00:11:53.972 --rc genhtml_legend=1 00:11:53.972 --rc geninfo_all_blocks=1 00:11:53.972 --rc geninfo_unexecuted_blocks=1 00:11:53.972 00:11:53.972 ' 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:53.972 06:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:53.972 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:53.973 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:11:53.973 06:17:45 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.546 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:00.546 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:00.546 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:00.546 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:00.546 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:00.546 Found net devices under 0000:af:00.0: cvl_0_0 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:00.546 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:00.546 Found net devices under 0000:af:00.1: cvl_0_1 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:00.546 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:00.547 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:00.547 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:12:00.547 00:12:00.547 --- 10.0.0.2 ping statistics --- 00:12:00.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.547 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:00.547 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:00.547 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:12:00.547 00:12:00.547 --- 10.0.0.1 ping statistics --- 00:12:00.547 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:00.547 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=880964 00:12:00.547 06:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 880964 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 880964 ']' 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:00.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 [2024-12-13 06:17:51.367499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:00.547 [2024-12-13 06:17:51.367541] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:00.547 [2024-12-13 06:17:51.449498] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:00.547 [2024-12-13 06:17:51.472287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:00.547 [2024-12-13 06:17:51.472323] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:00.547 [2024-12-13 06:17:51.472331] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:00.547 [2024-12-13 06:17:51.472337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:00.547 [2024-12-13 06:17:51.472342] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:00.547 [2024-12-13 06:17:51.473673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:00.547 [2024-12-13 06:17:51.473693] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:00.547 [2024-12-13 06:17:51.473813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:00.547 [2024-12-13 06:17:51.473814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 [2024-12-13 06:17:51.613820] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 Null1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 
06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 [2024-12-13 06:17:51.670613] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.547 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.547 Null2 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 
06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 Null3 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 Null4 00:12:00.548 
06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:00.548 00:12:00.548 Discovery Log Number of Records 6, Generation counter 6 00:12:00.548 =====Discovery Log Entry 0====== 00:12:00.548 trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: current discovery subsystem 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4420 00:12:00.548 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: explicit discovery connections, duplicate discovery information 00:12:00.548 sectype: none 00:12:00.548 =====Discovery Log Entry 1====== 00:12:00.548 trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: nvme subsystem 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4420 00:12:00.548 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: none 00:12:00.548 sectype: none 00:12:00.548 =====Discovery Log Entry 2====== 00:12:00.548 
trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: nvme subsystem 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4420 00:12:00.548 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: none 00:12:00.548 sectype: none 00:12:00.548 =====Discovery Log Entry 3====== 00:12:00.548 trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: nvme subsystem 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4420 00:12:00.548 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: none 00:12:00.548 sectype: none 00:12:00.548 =====Discovery Log Entry 4====== 00:12:00.548 trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: nvme subsystem 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4420 00:12:00.548 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: none 00:12:00.548 sectype: none 00:12:00.548 =====Discovery Log Entry 5====== 00:12:00.548 trtype: tcp 00:12:00.548 adrfam: ipv4 00:12:00.548 subtype: discovery subsystem referral 00:12:00.548 treq: not required 00:12:00.548 portid: 0 00:12:00.548 trsvcid: 4430 00:12:00.548 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:00.548 traddr: 10.0.0.2 00:12:00.548 eflags: none 00:12:00.548 sectype: none 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:00.548 Perform nvmf subsystem discovery via RPC 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.548 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.548 [ 00:12:00.548 { 00:12:00.548 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:00.548 "subtype": "Discovery", 00:12:00.548 "listen_addresses": [ 00:12:00.548 { 00:12:00.548 "trtype": "TCP", 00:12:00.548 "adrfam": "IPv4", 00:12:00.548 "traddr": "10.0.0.2", 00:12:00.548 "trsvcid": "4420" 00:12:00.548 } 00:12:00.548 ], 00:12:00.548 "allow_any_host": true, 00:12:00.548 "hosts": [] 00:12:00.548 }, 00:12:00.548 { 00:12:00.548 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:00.548 "subtype": "NVMe", 00:12:00.548 "listen_addresses": [ 00:12:00.548 { 00:12:00.548 "trtype": "TCP", 00:12:00.548 "adrfam": "IPv4", 00:12:00.548 "traddr": "10.0.0.2", 00:12:00.548 "trsvcid": "4420" 00:12:00.548 } 00:12:00.548 ], 00:12:00.548 "allow_any_host": true, 00:12:00.548 "hosts": [], 00:12:00.548 "serial_number": "SPDK00000000000001", 00:12:00.548 "model_number": "SPDK bdev Controller", 00:12:00.548 "max_namespaces": 32, 00:12:00.548 "min_cntlid": 1, 00:12:00.548 "max_cntlid": 65519, 00:12:00.548 "namespaces": [ 00:12:00.548 { 00:12:00.548 "nsid": 1, 00:12:00.548 "bdev_name": "Null1", 00:12:00.548 "name": "Null1", 00:12:00.548 "nguid": "9C92E44701FC442AB54DD4DC5F5A923C", 00:12:00.548 "uuid": "9c92e447-01fc-442a-b54d-d4dc5f5a923c" 00:12:00.548 } 00:12:00.548 ] 00:12:00.548 }, 00:12:00.548 { 00:12:00.548 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:00.548 "subtype": "NVMe", 00:12:00.548 "listen_addresses": [ 00:12:00.548 { 00:12:00.548 "trtype": "TCP", 00:12:00.549 "adrfam": "IPv4", 00:12:00.549 "traddr": "10.0.0.2", 00:12:00.549 "trsvcid": "4420" 00:12:00.549 } 00:12:00.549 ], 00:12:00.549 "allow_any_host": true, 00:12:00.549 "hosts": [], 00:12:00.549 "serial_number": "SPDK00000000000002", 00:12:00.549 "model_number": "SPDK bdev Controller", 00:12:00.549 "max_namespaces": 32, 00:12:00.549 "min_cntlid": 1, 00:12:00.549 "max_cntlid": 65519, 00:12:00.549 "namespaces": [ 00:12:00.549 { 00:12:00.549 "nsid": 1, 00:12:00.549 "bdev_name": "Null2", 00:12:00.549 "name": "Null2", 00:12:00.549 "nguid": "523C9163F61C424EA392A1F560FF0BB2", 
00:12:00.549 "uuid": "523c9163-f61c-424e-a392-a1f560ff0bb2" 00:12:00.549 } 00:12:00.549 ] 00:12:00.549 }, 00:12:00.549 { 00:12:00.549 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:00.549 "subtype": "NVMe", 00:12:00.549 "listen_addresses": [ 00:12:00.549 { 00:12:00.549 "trtype": "TCP", 00:12:00.549 "adrfam": "IPv4", 00:12:00.549 "traddr": "10.0.0.2", 00:12:00.549 "trsvcid": "4420" 00:12:00.549 } 00:12:00.549 ], 00:12:00.549 "allow_any_host": true, 00:12:00.549 "hosts": [], 00:12:00.549 "serial_number": "SPDK00000000000003", 00:12:00.549 "model_number": "SPDK bdev Controller", 00:12:00.549 "max_namespaces": 32, 00:12:00.549 "min_cntlid": 1, 00:12:00.549 "max_cntlid": 65519, 00:12:00.549 "namespaces": [ 00:12:00.549 { 00:12:00.549 "nsid": 1, 00:12:00.549 "bdev_name": "Null3", 00:12:00.549 "name": "Null3", 00:12:00.549 "nguid": "00AAB893EA4C40839E2EAAB8DEF03451", 00:12:00.549 "uuid": "00aab893-ea4c-4083-9e2e-aab8def03451" 00:12:00.549 } 00:12:00.549 ] 00:12:00.549 }, 00:12:00.549 { 00:12:00.549 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:00.549 "subtype": "NVMe", 00:12:00.549 "listen_addresses": [ 00:12:00.549 { 00:12:00.549 "trtype": "TCP", 00:12:00.549 "adrfam": "IPv4", 00:12:00.549 "traddr": "10.0.0.2", 00:12:00.549 "trsvcid": "4420" 00:12:00.549 } 00:12:00.549 ], 00:12:00.549 "allow_any_host": true, 00:12:00.549 "hosts": [], 00:12:00.549 "serial_number": "SPDK00000000000004", 00:12:00.549 "model_number": "SPDK bdev Controller", 00:12:00.549 "max_namespaces": 32, 00:12:00.549 "min_cntlid": 1, 00:12:00.549 "max_cntlid": 65519, 00:12:00.549 "namespaces": [ 00:12:00.549 { 00:12:00.549 "nsid": 1, 00:12:00.549 "bdev_name": "Null4", 00:12:00.549 "name": "Null4", 00:12:00.549 "nguid": "CAE423E942F047A1B56C815EE0A36930", 00:12:00.549 "uuid": "cae423e9-42f0-47a1-b56c-815ee0a36930" 00:12:00.549 } 00:12:00.549 ] 00:12:00.549 } 00:12:00.549 ] 00:12:00.549 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 
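The `nvmf_get_subsystems` reply above is a JSON array of subsystem objects (discovery subsystem plus cnode1-4, each with listen addresses and namespaces). A minimal Python sketch of post-processing that shape, using an abridged, hypothetical subset of the data shown in the log:

```python
import json

# Abridged sample of the nvmf_get_subsystems reply shown above
# (hypothetical subset; the real reply also lists cnode2-4).
reply = json.loads("""
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}]},
  {"nqn": "nqn.2016-06.io.spdk:cnode1", "subtype": "NVMe",
   "listen_addresses": [{"trtype": "TCP", "adrfam": "IPv4",
                         "traddr": "10.0.0.2", "trsvcid": "4420"}],
   "namespaces": [{"nsid": 1, "bdev_name": "Null1"}]}
]
""")

# Collect the NQNs of the non-discovery subsystems -- the same kind of
# field extraction the test later does with jq -r '.[].name'.
nvme_nqns = [s["nqn"] for s in reply if s["subtype"] == "NVMe"]
print(nvme_nqns)  # -> ['nqn.2016-06.io.spdk:cnode1']
```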
06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:00.549 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.549 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:00.549 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:00.549 rmmod nvme_tcp 00:12:00.549 rmmod nvme_fabrics 00:12:00.549 rmmod nvme_keyring 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 880964 ']' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 880964 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 880964 ']' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 880964 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:00.549 
06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:00.549 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 880964 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 880964' 00:12:00.809 killing process with pid 880964 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 880964 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 880964 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.809 06:17:52 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:03.345 00:12:03.345 real 0m9.330s 00:12:03.345 user 0m5.588s 00:12:03.345 sys 0m4.807s 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 ************************************ 00:12:03.345 END TEST nvmf_target_discovery 00:12:03.345 ************************************ 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:03.345 ************************************ 00:12:03.345 START TEST nvmf_referrals 00:12:03.345 ************************************ 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:03.345 * Looking for test storage... 
00:12:03.345 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:03.345 06:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:03.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.345 
--rc genhtml_branch_coverage=1 00:12:03.345 --rc genhtml_function_coverage=1 00:12:03.345 --rc genhtml_legend=1 00:12:03.345 --rc geninfo_all_blocks=1 00:12:03.345 --rc geninfo_unexecuted_blocks=1 00:12:03.345 00:12:03.345 ' 00:12:03.345 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:03.345 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.345 --rc genhtml_branch_coverage=1 00:12:03.345 --rc genhtml_function_coverage=1 00:12:03.345 --rc genhtml_legend=1 00:12:03.345 --rc geninfo_all_blocks=1 00:12:03.345 --rc geninfo_unexecuted_blocks=1 00:12:03.345 00:12:03.346 ' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:03.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.346 --rc genhtml_branch_coverage=1 00:12:03.346 --rc genhtml_function_coverage=1 00:12:03.346 --rc genhtml_legend=1 00:12:03.346 --rc geninfo_all_blocks=1 00:12:03.346 --rc geninfo_unexecuted_blocks=1 00:12:03.346 00:12:03.346 ' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:03.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:03.346 --rc genhtml_branch_coverage=1 00:12:03.346 --rc genhtml_function_coverage=1 00:12:03.346 --rc genhtml_legend=1 00:12:03.346 --rc geninfo_all_blocks=1 00:12:03.346 --rc geninfo_unexecuted_blocks=1 00:12:03.346 00:12:03.346 ' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:03.346 
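The `cmp_versions 1.15 '<' 2` trace from scripts/common.sh above splits each version on `.`, `-`, or `:` (the `IFS=.-:` step) and compares numeric components left to right, with missing components treated as 0. A minimal Python sketch of that comparison (function names hypothetical, not SPDK's API):

```python
import re

def cmp_versions(ver1: str, op: str, ver2: str) -> bool:
    """Compare dotted versions component-wise, like scripts/common.sh."""
    v1 = [int(x) for x in re.split(r"[.\-:]", ver1)]
    v2 = [int(x) for x in re.split(r"[.\-:]", ver2)]
    # Pad the shorter list with zeros; in the bash trace, unset array
    # elements evaluate to 0 inside the (( v < max(ver1_l, ver2_l) )) loop.
    n = max(len(v1), len(v2))
    v1 += [0] * (n - len(v1))
    v2 += [0] * (n - len(v2))
    for a, b in zip(v1, v2):
        if a != b:
            return {"<": a < b, ">": a > b}[op]
    return op not in ("<", ">")  # equal versions satisfy neither < nor >

def lt(a: str, b: str) -> bool:
    return cmp_versions(a, "<", b)

print(lt("1.15", "2"))  # the exact check made against the lcov version -> True
```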
06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:03.346 06:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:03.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:03.346 06:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:03.346 06:17:54 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:09.912 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:09.913 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:09.913 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:09.913 Found net devices under 0000:af:00.0: cvl_0_0 00:12:09.913 06:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:09.913 Found net devices under 0000:af:00.1: cvl_0_1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:09.913 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:09.913 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.337 ms 00:12:09.913 00:12:09.913 --- 10.0.0.2 ping statistics --- 00:12:09.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.913 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:09.913 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:09.913 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:12:09.913 00:12:09.913 --- 10.0.0.1 ping statistics --- 00:12:09.913 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:09.913 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=884717 00:12:09.913 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 884717 00:12:09.914 
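The trace above (nvmf/common.sh@250-291) builds the test topology: one port of the E810 NIC (cvl_0_0) is moved into a network namespace to act as the NVMe/TCP target at 10.0.0.2, while the other port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule admitting port 4420 and a ping in each direction to confirm reachability. A dry-run sketch of that sequence follows; the interface names, addresses, and iptables comment are taken from this log, while the `run` and `setup_topology` helpers are illustrative additions (here `run` only echoes each command, so the sketch can be executed without root or real hardware):

```shell
#!/bin/sh
# Dry-run sketch of the NVMe/TCP test topology set up in nvmf/common.sh.
# `run` echoes instead of executing; remove it to apply for real (needs root).
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk

setup_topology() {
    # Clear any stale addresses on both NIC ports.
    run ip -4 addr flush cvl_0_0
    run ip -4 addr flush cvl_0_1
    # Target port goes into its own namespace; initiator stays in the root ns.
    run ip netns add "$NS"
    run ip link set cvl_0_0 netns "$NS"
    run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side
    run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    run ip link set cvl_0_1 up
    run ip netns exec "$NS" ip link set cvl_0_0 up
    run ip netns exec "$NS" ip link set lo up
    # Admit NVMe/TCP traffic (port 4420), tagged so cleanup can find the rule.
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    # Verify reachability in both directions, as the trace does.
    run ping -c 1 10.0.0.2
    run ip netns exec "$NS" ping -c 1 10.0.0.1
}

setup_topology
```

With this topology in place, the target application is launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk nvmf_tgt ...`, as seen below), so target and initiator exercise a real NIC-to-NIC TCP path on one host.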
06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 884717 ']' 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:09.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 [2024-12-13 06:18:00.682854] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:09.914 [2024-12-13 06:18:00.682899] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:09.914 [2024-12-13 06:18:00.764442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:09.914 [2024-12-13 06:18:00.787310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:09.914 [2024-12-13 06:18:00.787348] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:09.914 [2024-12-13 06:18:00.787355] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:09.914 [2024-12-13 06:18:00.787361] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:09.914 [2024-12-13 06:18:00.787366] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:09.914 [2024-12-13 06:18:00.788804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:09.914 [2024-12-13 06:18:00.788910] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:09.914 [2024-12-13 06:18:00.789016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:09.914 [2024-12-13 06:18:00.789016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 [2024-12-13 06:18:00.921224] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 [2024-12-13 06:18:00.942585] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:09.914 06:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.914 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:09.915 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.172 06:18:01 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.430 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:10.687 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:10.944 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:11.202 06:18:02 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:11.202 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:11.203 06:18:02 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:11.460 rmmod nvme_tcp 00:12:11.460 rmmod nvme_fabrics 00:12:11.460 rmmod nvme_keyring 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 884717 ']' 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 884717 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 884717 ']' 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 884717 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:11.460 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 884717 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 884717' 00:12:11.719 killing process with pid 884717 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 884717 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 884717 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:11.719 06:18:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:14.287 00:12:14.287 real 0m10.832s 00:12:14.287 user 0m12.539s 00:12:14.287 sys 0m5.108s 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:14.287 ************************************ 
00:12:14.287 END TEST nvmf_referrals 00:12:14.287 ************************************ 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:14.287 ************************************ 00:12:14.287 START TEST nvmf_connect_disconnect 00:12:14.287 ************************************ 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:14.287 * Looking for test storage... 
00:12:14.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:14.287 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:14.287 --rc genhtml_branch_coverage=1 00:12:14.287 --rc genhtml_function_coverage=1 00:12:14.287 --rc genhtml_legend=1 00:12:14.287 --rc geninfo_all_blocks=1 00:12:14.287 --rc geninfo_unexecuted_blocks=1 00:12:14.287 00:12:14.287 ' 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:14.287 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:14.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:14.288 06:18:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.858 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:20.858 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:20.859 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:20.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:20.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:20.859 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:20.859 Found net devices under 0000:af:00.0: cvl_0_0 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:20.859 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:20.859 Found net devices under 0000:af:00.1: cvl_0_1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:20.859 06:18:11 
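The `nvmf_tcp_init` steps above build a two-interface TCP test topology: the target NIC (`cvl_0_0`) is moved into a private network namespace and each side gets a 10.0.0.x address. A dry-run recap of that sequence, with a `run` stub that only prints each command (the real steps require root and the physical `cvl_0_*` devices):

```shell
#!/usr/bin/env bash
# Dry-run recap of the netns plumbing performed by nvmf_tcp_init above.
# 'run' only echoes; drop the stub (and run as root) to execute for real.
run() { echo "+ $*"; }

run ip netns add cvl_0_0_ns_spdk                         # private ns for the target
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # move target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up      # loopback inside the ns
```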
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:20.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:20.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.441 ms 00:12:20.859 00:12:20.859 --- 10.0.0.2 ping statistics --- 00:12:20.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.859 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:12:20.859 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:20.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:20.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:12:20.859 00:12:20.860 --- 10.0.0.1 ping statistics --- 00:12:20.860 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:20.860 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=889201 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 889201 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 889201 ']' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:20.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 [2024-12-13 06:18:11.686235] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:12:20.860 [2024-12-13 06:18:11.686278] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:20.860 [2024-12-13 06:18:11.747565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:20.860 [2024-12-13 06:18:11.770895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:20.860 [2024-12-13 06:18:11.770931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:20.860 [2024-12-13 06:18:11.770938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:20.860 [2024-12-13 06:18:11.770944] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:20.860 [2024-12-13 06:18:11.770952] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:20.860 [2024-12-13 06:18:11.772397] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:20.860 [2024-12-13 06:18:11.772440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:20.860 [2024-12-13 06:18:11.772550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:20.860 [2024-12-13 06:18:11.772551] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:20.860 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 [2024-12-13 06:18:11.916055] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.860 06:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:20.860 [2024-12-13 06:18:11.973833] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:20.860 06:18:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:22.754 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:25.272 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:27.793 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.691 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:32.219 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.120 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.175 
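The target configuration issued above via `rpc_cmd` boils down to five RPCs against the namespaced `nvmf_tgt`. A dry-run recap with a stub that prints instead of invoking SPDK's `rpc.py` (so it runs without a live target):

```shell
#!/usr/bin/env bash
# Dry-run recap of the RPC sequence connect_disconnect.sh sends above.
# 'rpc' is a stub standing in for: ip netns exec ... scripts/rpc.py
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0           # TCP transport
rpc bdev_malloc_create 64 512                              # 64 MiB / 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```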
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.601 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.200 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.600 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:55.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.552 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:02.081 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.036 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:10.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.456 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.980 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:17.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.397 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.021 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.544 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.439 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:33.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.376 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.899 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.795 [2024-12-13 06:19:32.404359] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a210 is same with the state(6) to be set 00:13:40.795 [2024-12-13 06:19:32.404429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a210 is same with the state(6) to be set 00:13:40.795 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:47.866 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:54.808 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.333 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:08.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.594 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.117 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:17.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.487 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.010 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:31.425 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:33.946 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.471 [2024-12-13 06:20:27.686375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a210 is same with the state(6) to be set 00:14:36.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:38.996 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.497 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:45.442 [2024-12-13 06:20:37.055437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b2a210 is same with the state(6) to be set 00:14:45.442 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.968 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:52.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.920 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:01.872 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.398 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:06.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:08.823 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:11.349 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:13.248 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:15.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.300 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.197 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
00:15:27.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:29.673 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.199 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:36.625 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.523 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.048 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:43.638 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:45.558 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:52.506 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:59.455 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.983 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.405 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:08.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:10.830 06:22:02 
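Each `NQN:... disconnected 1 controller(s)` line above is one iteration of the stress loop (`num_iterations=100`, `NVME_CONNECT='nvme connect -i 8'`). A sketch of the loop's shape, with the nvme-cli calls stubbed so it runs without a target or root; the wait-for-device step the real script performs is noted only as a comment:

```shell
#!/usr/bin/env bash
# Shape of the connect/disconnect stress loop driven above. Stubs stand
# in for the real nvme-cli calls so this sketch runs anywhere.
nqn=nqn.2016-06.io.spdk:cnode1
num_iterations=100
completed=0

connect_stub()    { :; }  # stands in for: nvme connect -i 8 -t tcp -a 10.0.0.2 -s 4420 -n "$nqn"
disconnect_stub() { echo "NQN:$nqn disconnected 1 controller(s)"; }

for (( i = 1; i <= num_iterations; i++ )); do
    connect_stub
    # the real test waits for the /dev/nvme* device before disconnecting
    disconnect_stub > /dev/null
    completed=$(( completed + 1 ))
done
echo "completed $completed/$num_iterations iterations"
```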
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:10.830 rmmod nvme_tcp 00:16:10.830 rmmod nvme_fabrics 00:16:10.830 rmmod nvme_keyring 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 889201 ']' 00:16:10.830 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 889201 00:16:10.831 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 889201 ']' 00:16:10.831 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 889201 00:16:10.831 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:10.831 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:10.831 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 889201 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 889201' 00:16:11.090 killing process with pid 889201 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 889201 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 889201 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:11.090 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:11.091 06:22:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
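The `iptr` teardown above restores `iptables-save | grep -v SPDK_NVMF`: because the setup tagged its ACCEPT rule with an `SPDK_NVMF` comment (nvmf/common.sh@790 earlier in this run), only the test's own rules are dropped. A sketch of that filtering step using a canned ruleset in place of the live one, so it runs unprivileged:

```shell
#!/usr/bin/env bash
# Sketch of the tagged-rule cleanup done by iptr above: keep everything
# except rules carrying the SPDK_NVMF comment marker. The ruleset below
# is canned; the real helper pipes iptables-save into iptables-restore.
saved_rules='-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -j DROP'

kept=$(printf '%s\n' "$saved_rules" | grep -v SPDK_NVMF)
printf '%s\n' "$kept"     # the two untagged rules survive
```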
00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:13.635 00:16:13.635 real 3m59.329s 00:16:13.635 user 15m14.180s 00:16:13.635 sys 0m24.692s 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:13.635 ************************************ 00:16:13.635 END TEST nvmf_connect_disconnect 00:16:13.635 ************************************ 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:13.635 ************************************ 00:16:13.635 START TEST nvmf_multitarget 00:16:13.635 ************************************ 00:16:13.635 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:13.635 * Looking for test storage... 
00:16:13.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:13.636 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:13.636 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:13.636 06:22:04 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:13.636 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.636 --rc genhtml_branch_coverage=1 00:16:13.636 --rc genhtml_function_coverage=1 00:16:13.636 --rc genhtml_legend=1 00:16:13.636 --rc geninfo_all_blocks=1 00:16:13.636 --rc geninfo_unexecuted_blocks=1 00:16:13.636 00:16:13.636 ' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.636 --rc genhtml_branch_coverage=1 00:16:13.636 --rc genhtml_function_coverage=1 00:16:13.636 --rc genhtml_legend=1 00:16:13.636 --rc geninfo_all_blocks=1 00:16:13.636 --rc geninfo_unexecuted_blocks=1 00:16:13.636 00:16:13.636 ' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.636 --rc genhtml_branch_coverage=1 00:16:13.636 --rc genhtml_function_coverage=1 00:16:13.636 --rc genhtml_legend=1 00:16:13.636 --rc geninfo_all_blocks=1 00:16:13.636 --rc geninfo_unexecuted_blocks=1 00:16:13.636 00:16:13.636 ' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:13.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:13.636 --rc genhtml_branch_coverage=1 00:16:13.636 --rc genhtml_function_coverage=1 00:16:13.636 --rc genhtml_legend=1 00:16:13.636 --rc geninfo_all_blocks=1 00:16:13.636 --rc geninfo_unexecuted_blocks=1 00:16:13.636 00:16:13.636 ' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:13.636 06:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:13.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:13.636 06:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:13.636 06:22:05 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:20.202 06:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:20.202 06:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:20.202 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:20.202 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:20.202 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:20.203 06:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:20.203 Found net devices under 0000:af:00.0: cvl_0_0 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:20.203 
06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:20.203 Found net devices under 0000:af:00.1: cvl_0_1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:20.203 06:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:20.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:20.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.297 ms 00:16:20.203 00:16:20.203 --- 10.0.0.2 ping statistics --- 00:16:20.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.203 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:20.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:20.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:20.203 00:16:20.203 --- 10.0.0.1 ping statistics --- 00:16:20.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:20.203 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=932041 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 932041 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 932041 ']' 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:20.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:20.203 06:22:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 [2024-12-13 06:22:11.032267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:20.203 [2024-12-13 06:22:11.032318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:20.203 [2024-12-13 06:22:11.111850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:20.203 [2024-12-13 06:22:11.135376] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:20.203 [2024-12-13 06:22:11.135414] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:20.203 [2024-12-13 06:22:11.135421] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:20.203 [2024-12-13 06:22:11.135427] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:20.203 [2024-12-13 06:22:11.135431] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:20.203 [2024-12-13 06:22:11.136914] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:20.203 [2024-12-13 06:22:11.137021] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:20.203 [2024-12-13 06:22:11.137127] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:20.203 [2024-12-13 06:22:11.137128] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:20.203 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:20.204 "nvmf_tgt_1" 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:20.204 "nvmf_tgt_2" 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:20.204 true 00:16:20.204 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:20.462 true 00:16:20.462 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:20.462 06:22:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:20.462 rmmod nvme_tcp 00:16:20.462 rmmod nvme_fabrics 00:16:20.462 rmmod nvme_keyring 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 932041 ']' 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 932041 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 932041 ']' 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 932041 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:20.462 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 932041 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 932041' 00:16:20.721 killing process with pid 932041 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 932041 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 932041 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.721 06:22:12 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:23.259 00:16:23.259 real 0m9.525s 00:16:23.259 user 0m7.109s 00:16:23.259 sys 0m4.909s 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:23.259 ************************************ 00:16:23.259 END TEST nvmf_multitarget 00:16:23.259 ************************************ 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:23.259 ************************************ 00:16:23.259 START TEST nvmf_rpc 00:16:23.259 ************************************ 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:23.259 * Looking for test storage... 
00:16:23.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:23.259 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:23.260 06:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.260 --rc genhtml_branch_coverage=1 00:16:23.260 --rc genhtml_function_coverage=1 00:16:23.260 --rc genhtml_legend=1 00:16:23.260 --rc geninfo_all_blocks=1 00:16:23.260 --rc geninfo_unexecuted_blocks=1 
00:16:23.260 00:16:23.260 ' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.260 --rc genhtml_branch_coverage=1 00:16:23.260 --rc genhtml_function_coverage=1 00:16:23.260 --rc genhtml_legend=1 00:16:23.260 --rc geninfo_all_blocks=1 00:16:23.260 --rc geninfo_unexecuted_blocks=1 00:16:23.260 00:16:23.260 ' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.260 --rc genhtml_branch_coverage=1 00:16:23.260 --rc genhtml_function_coverage=1 00:16:23.260 --rc genhtml_legend=1 00:16:23.260 --rc geninfo_all_blocks=1 00:16:23.260 --rc geninfo_unexecuted_blocks=1 00:16:23.260 00:16:23.260 ' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:23.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:23.260 --rc genhtml_branch_coverage=1 00:16:23.260 --rc genhtml_function_coverage=1 00:16:23.260 --rc genhtml_legend=1 00:16:23.260 --rc geninfo_all_blocks=1 00:16:23.260 --rc geninfo_unexecuted_blocks=1 00:16:23.260 00:16:23.260 ' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.260 06:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:23.260 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:23.260 06:22:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:23.260 06:22:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:29.830 
06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:29.830 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:29.830 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:29.830 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:29.831 Found net devices under 0000:af:00.0: cvl_0_0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:29.831 Found net devices under 0000:af:00.1: cvl_0_1 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:29.831 06:22:20 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:29.831 
06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:29.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:29.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:16:29.831 00:16:29.831 --- 10.0.0.2 ping statistics --- 00:16:29.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.831 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:29.831 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:29.831 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.135 ms 00:16:29.831 00:16:29.831 --- 10.0.0.1 ping statistics --- 00:16:29.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:29.831 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=935718 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:29.831 
06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 935718 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 935718 ']' 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:29.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.831 [2024-12-13 06:22:20.676097] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:16:29.831 [2024-12-13 06:22:20.676139] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:29.831 [2024-12-13 06:22:20.754950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:29.831 [2024-12-13 06:22:20.777315] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:29.831 [2024-12-13 06:22:20.777354] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:29.831 [2024-12-13 06:22:20.777362] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:29.831 [2024-12-13 06:22:20.777368] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:29.831 [2024-12-13 06:22:20.777373] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:29.831 [2024-12-13 06:22:20.778855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:29.831 [2024-12-13 06:22:20.778962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:29.831 [2024-12-13 06:22:20.779075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:29.831 [2024-12-13 06:22:20.779075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.831 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:29.831 "tick_rate": 2100000000, 00:16:29.831 "poll_groups": [ 00:16:29.831 { 00:16:29.831 "name": "nvmf_tgt_poll_group_000", 00:16:29.831 "admin_qpairs": 0, 00:16:29.831 "io_qpairs": 0, 00:16:29.831 
"current_admin_qpairs": 0, 00:16:29.831 "current_io_qpairs": 0, 00:16:29.831 "pending_bdev_io": 0, 00:16:29.831 "completed_nvme_io": 0, 00:16:29.831 "transports": [] 00:16:29.831 }, 00:16:29.831 { 00:16:29.831 "name": "nvmf_tgt_poll_group_001", 00:16:29.831 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [] 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_002", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [] 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_003", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [] 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 }' 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:29.832 06:22:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # 
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 [2024-12-13 06:22:21.027539] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:29.832 "tick_rate": 2100000000, 00:16:29.832 "poll_groups": [ 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_000", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [ 00:16:29.832 { 00:16:29.832 "trtype": "TCP" 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_001", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [ 00:16:29.832 { 00:16:29.832 "trtype": "TCP" 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_002", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 
"current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [ 00:16:29.832 { 00:16:29.832 "trtype": "TCP" 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 }, 00:16:29.832 { 00:16:29.832 "name": "nvmf_tgt_poll_group_003", 00:16:29.832 "admin_qpairs": 0, 00:16:29.832 "io_qpairs": 0, 00:16:29.832 "current_admin_qpairs": 0, 00:16:29.832 "current_io_qpairs": 0, 00:16:29.832 "pending_bdev_io": 0, 00:16:29.832 "completed_nvme_io": 0, 00:16:29.832 "transports": [ 00:16:29.832 { 00:16:29.832 "trtype": "TCP" 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 } 00:16:29.832 ] 00:16:29.832 }' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # 
MALLOC_BDEV_SIZE=64 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 Malloc1 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.832 [2024-12-13 06:22:21.212729] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.832 
06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:29.832 [2024-12-13 06:22:21.241312] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:29.832 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:29.832 could not add new controller: failed to write to nvme-fabrics device 00:16:29.832 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.833 06:22:21 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.833 06:22:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:31.206 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:31.206 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:31.206 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:31.206 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:31.206 06:22:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:33.104 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:33.104 06:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp 
-n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:33.104 [2024-12-13 06:22:24.596352] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:33.104 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:33.104 could not add new controller: failed to write to nvme-fabrics device 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:33.104 06:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.104 06:22:24 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:34.476 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:34.476 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:34.476 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:34.476 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:34.476 06:22:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( 
nvme_devices == nvme_device_counter )) 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:36.372 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s 
SPDKISFASTANDAWESOME 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.372 [2024-12-13 06:22:27.927946] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.372 06:22:27 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- 
# nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:37.743 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:37.743 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:37.743 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:37.743 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:37.743 06:22:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:39.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 06:22:31 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 [2024-12-13 06:22:31.278097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.640 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:39.897 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.897 06:22:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:40.830 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:40.830 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:40.830 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:40.830 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:40.830 06:22:32 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:42.767 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:43.052 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
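The trace above has just completed one full iteration of the connect/disconnect loop (`target/rpc.sh` @81-@94): create a subsystem, expose a TCP listener, attach a namespace, allow any host, connect an initiator and wait for the serial, then disconnect and tear down. A minimal dry-run sketch of that cycle follows; `RPC` is set to `echo` so it runs without a live `nvmf_tgt` (swap in SPDK's `scripts/rpc.py` against a running target), and the address, port, and serial are the values from this trace.

```shell
#!/usr/bin/env bash
set -eu
# Dry-run sketch: echo the RPCs instead of invoking rpc.py / nvme against a
# live target. Replace RPC with the path to SPDK's scripts/rpc.py to run it
# for real (assumption: a running nvmf_tgt with a Malloc1 bdev).
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

cycle() {
  # target side: build the subsystem
  $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5
  $RPC nvmf_subsystem_allow_any_host "$NQN"
  # host side: connect, wait for the serial to show up in lsblk, disconnect
  echo nvme connect -t tcp -n "$NQN" -a 10.0.0.2 -s 4420
  echo "(poll: lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)"
  echo nvme disconnect -n "$NQN"
  # teardown
  $RPC nvmf_subsystem_remove_ns "$NQN" 5
  $RPC nvmf_delete_subsystem "$NQN"
}

cycle
```

The waitforserial helper in the trace does the same polling shown in the comment: it loops up to 15 times, sleeping 2 s between `lsblk` checks, until the device count matches.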
00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 [2024-12-13 06:22:34.538860] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.052 06:22:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:44.436 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:44.436 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:44.436 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:16:44.436 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:44.436 06:22:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:46.333 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:46.334 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 [2024-12-13 06:22:37.931032] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.334 06:22:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.712 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:47.712 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:47.712 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:47.712 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:47.712 06:22:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:49.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.609 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.609 [2024-12-13 06:22:41.261594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 06:22:41 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.868 06:22:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:50.801 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:50.801 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:50.801 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:50.801 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:50.801 06:22:42 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l 
-o NAME,SERIAL 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 [2024-12-13 06:22:44.572763] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.328 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 [2024-12-13 06:22:44.624854] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.329 
06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 [2024-12-13 06:22:44.672982] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
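From `target/rpc.sh@99` onward the trace switches to a second loop (`seq 1 5`) that churns subsystems through RPC only, with no host connect: create, add listener, add namespace, allow any host, then immediately remove the namespace and delete the subsystem. A dry-run sketch of that loop, under the same assumptions as above (`RPC` echoes instead of calling SPDK's `scripts/rpc.py`):

```shell
#!/usr/bin/env bash
set -eu
# Dry-run: echo each RPC rather than driving a live nvmf_tgt.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
loops=5

churn() {
  for i in $(seq 1 "$loops"); do
    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
    $RPC nvmf_subsystem_allow_any_host "$NQN"
    # no host connect in this loop: tear down straight away
    $RPC nvmf_subsystem_remove_ns "$NQN" 1
    $RPC nvmf_delete_subsystem "$NQN"
  done
}

churn
```

This exercises repeated subsystem setup/teardown on the target alone, which is why the trace in this stretch shows only RPC results and listener notices, with no `nvme connect`/`disconnect` or `lsblk` polling between iterations.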
00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.329 
06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 [2024-12-13 06:22:44.721146] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 [2024-12-13 
06:22:44.773331] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 
06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.329 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:16:53.329 "tick_rate": 2100000000, 00:16:53.329 "poll_groups": [ 00:16:53.329 { 00:16:53.329 "name": "nvmf_tgt_poll_group_000", 00:16:53.329 "admin_qpairs": 2, 00:16:53.329 "io_qpairs": 168, 00:16:53.329 "current_admin_qpairs": 0, 00:16:53.329 "current_io_qpairs": 0, 00:16:53.329 "pending_bdev_io": 0, 00:16:53.329 "completed_nvme_io": 268, 00:16:53.329 "transports": [ 00:16:53.329 { 00:16:53.329 "trtype": "TCP" 00:16:53.329 } 00:16:53.329 ] 00:16:53.329 }, 00:16:53.329 { 00:16:53.329 "name": "nvmf_tgt_poll_group_001", 00:16:53.329 "admin_qpairs": 2, 00:16:53.329 "io_qpairs": 168, 00:16:53.329 "current_admin_qpairs": 0, 00:16:53.329 "current_io_qpairs": 0, 00:16:53.329 "pending_bdev_io": 0, 00:16:53.329 "completed_nvme_io": 267, 00:16:53.329 "transports": [ 00:16:53.329 { 00:16:53.329 "trtype": "TCP" 00:16:53.329 } 00:16:53.329 ] 00:16:53.329 }, 00:16:53.329 { 00:16:53.330 "name": "nvmf_tgt_poll_group_002", 00:16:53.330 "admin_qpairs": 1, 00:16:53.330 "io_qpairs": 168, 00:16:53.330 "current_admin_qpairs": 0, 00:16:53.330 "current_io_qpairs": 0, 00:16:53.330 "pending_bdev_io": 0, 00:16:53.330 "completed_nvme_io": 219, 00:16:53.330 "transports": [ 00:16:53.330 { 00:16:53.330 "trtype": "TCP" 00:16:53.330 } 00:16:53.330 ] 00:16:53.330 }, 00:16:53.330 { 00:16:53.330 "name": "nvmf_tgt_poll_group_003", 00:16:53.330 "admin_qpairs": 2, 00:16:53.330 "io_qpairs": 168, 
00:16:53.330 "current_admin_qpairs": 0, 00:16:53.330 "current_io_qpairs": 0, 00:16:53.330 "pending_bdev_io": 0, 00:16:53.330 "completed_nvme_io": 268, 00:16:53.330 "transports": [ 00:16:53.330 { 00:16:53.330 "trtype": "TCP" 00:16:53.330 } 00:16:53.330 ] 00:16:53.330 } 00:16:53.330 ] 00:16:53.330 }' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
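The `jsum` aggregation traced above (target/rpc.sh@19-20) totals one numeric field across all poll groups by piping `jq` output into `awk '{s+=$1}END{print s}'`. A minimal standalone sketch of the same pattern, with the stats JSON inlined from the captured `nvmf_get_stats` output and the `jq` step replaced by a `grep`/`sed` extraction (an assumption made here only so the sketch runs without jq installed; the real script uses `jq '.poll_groups[].io_qpairs'`):

```shell
# Emulate: rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs' | awk '{s+=$1}END{print s}'
# Values copied from the captured stats: four poll groups, 168 io_qpairs each.
stats='{"poll_groups":[{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168},{"io_qpairs":168}]}'
total=$(printf '%s\n' "$stats" \
  | grep -o '"io_qpairs":[0-9]*' \
  | sed 's/.*://' \
  | awk '{s+=$1} END {print s}')
echo "$total"
```

The result reproduces the `(( 672 > 0 ))` sanity check the test performs after the subsystem create/delete loop.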
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:53.330 rmmod nvme_tcp 00:16:53.330 rmmod nvme_fabrics 00:16:53.330 rmmod nvme_keyring 00:16:53.330 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 935718 ']' 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 935718 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 935718 ']' 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 935718 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.589 06:22:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 935718 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 935718' 00:16:53.589 killing process with pid 935718 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@973 -- # kill 935718 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 935718 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:53.589 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:53.590 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.590 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:53.590 06:22:45 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:56.127 00:16:56.127 real 0m32.833s 00:16:56.127 user 1m39.023s 00:16:56.127 sys 0m6.413s 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:56.127 ************************************ 00:16:56.127 END TEST nvmf_rpc 00:16:56.127 
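The `killprocess` sequence traced above first probes the pid with `kill -0`, then reads the process name with `ps --no-headers -o comm=` so it can refuse to kill `sudo` itself before issuing the real `kill`. A self-contained sketch of that liveness-and-name check, using the current shell's own pid so it always has a live target (the pid 935718 and the `reactor_0` name in the log are specific to this run; GNU `ps` is assumed for `--no-headers`):

```shell
# Probe liveness without sending a signal, then fetch the command name,
# mirroring the autotest_common.sh@958-964 checks seen in the trace.
pid=$$
if kill -0 "$pid" 2>/dev/null; then
  name=$(ps --no-headers -o comm= "$pid")
  echo "process $pid is alive (comm: $name)"
else
  echo "process $pid already exited"
fi
```

Only after both checks pass does the harness emit the `killing process with pid ...` line and send the actual `kill`, then `wait` for the reactor to exit.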
************************************ 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:56.127 ************************************ 00:16:56.127 START TEST nvmf_invalid 00:16:56.127 ************************************ 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:16:56.127 * Looking for test storage... 00:16:56.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
scripts/common.sh@336 -- # read -ra ver1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:56.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.127 --rc genhtml_branch_coverage=1 00:16:56.127 --rc genhtml_function_coverage=1 00:16:56.127 --rc genhtml_legend=1 00:16:56.127 --rc geninfo_all_blocks=1 00:16:56.127 --rc geninfo_unexecuted_blocks=1 00:16:56.127 00:16:56.127 ' 
00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:56.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.127 --rc genhtml_branch_coverage=1 00:16:56.127 --rc genhtml_function_coverage=1 00:16:56.127 --rc genhtml_legend=1 00:16:56.127 --rc geninfo_all_blocks=1 00:16:56.127 --rc geninfo_unexecuted_blocks=1 00:16:56.127 00:16:56.127 ' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:56.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.127 --rc genhtml_branch_coverage=1 00:16:56.127 --rc genhtml_function_coverage=1 00:16:56.127 --rc genhtml_legend=1 00:16:56.127 --rc geninfo_all_blocks=1 00:16:56.127 --rc geninfo_unexecuted_blocks=1 00:16:56.127 00:16:56.127 ' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:56.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:56.127 --rc genhtml_branch_coverage=1 00:16:56.127 --rc genhtml_function_coverage=1 00:16:56.127 --rc genhtml_legend=1 00:16:56.127 --rc geninfo_all_blocks=1 00:16:56.127 --rc geninfo_unexecuted_blocks=1 00:16:56.127 00:16:56.127 ' 00:16:56.127 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:56.128 06:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:56.128 
06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:56.128 06:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:56.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:56.128 06:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:16:56.128 06:22:47 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:02.700 06:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.700 06:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:02.700 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:02.700 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:02.700 Found net devices under 0000:af:00.0: cvl_0_0 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:02.700 Found net devices under 0000:af:00.1: cvl_0_1 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.700 06:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:02.700 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.701 06:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:02.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.462 ms 00:17:02.701 00:17:02.701 --- 10.0.0.2 ping statistics --- 00:17:02.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.701 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:02.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:17:02.701 00:17:02.701 --- 10.0.0.1 ping statistics --- 00:17:02.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.701 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:02.701 06:22:53 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=943216 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 943216 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 943216 ']' 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.701 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
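The nvmf_tcp_init sequence traced above (flush addresses, create the cvl_0_0_ns_spdk namespace, move the target port into it, assign 10.0.0.1/10.0.0.2, open TCP port 4420 with iptables, verify with ping in both directions) can be sketched as a standalone script. This is a sketch, not the real nvmf/common.sh: interface, namespace, and IP names are taken from the log, and a dry-run wrapper is added so the commands can be inspected without root (set DRY_RUN=0 to actually execute them).

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init flow from the log above (not the real helper).
# Dry-run by default: commands are recorded in CMDS; DRY_RUN=0 executes them
# for real, which requires root.
set -euo pipefail

TARGET_IF=cvl_0_0          # physical port moved into the target namespace
INITIATOR_IF=cvl_0_1       # stays in the root namespace
NS=cvl_0_0_ns_spdk
INITIATOR_IP=10.0.0.1
TARGET_IP=10.0.0.2

declare -a CMDS=()
run() {
    CMDS+=("$*")                                 # record for inspection
    if [[ "${DRY_RUN:-1}" == 0 ]]; then "$@"; fi
}

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"
run ip addr add "$INITIATOR_IP/24" dev "$INITIATOR_IF"
run ip netns exec "$NS" ip addr add "$TARGET_IP/24" dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 "$TARGET_IP"                        # initiator -> target
run ip netns exec "$NS" ping -c 1 "$INITIATOR_IP" # target ns -> initiator
```

Moving the target interface into its own namespace is what lets one host act as both NVMe-oF target and initiator over real NICs, which is why every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk`.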
00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable
00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:17:02.701 [2024-12-13 06:22:53.679187] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:17:02.701 [2024-12-13 06:22:53.679231] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:17:02.701 [2024-12-13 06:22:53.758849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:17:02.701 [2024-12-13 06:22:53.782043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:17:02.701 [2024-12-13 06:22:53.782081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:17:02.701 [2024-12-13 06:22:53.782089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:17:02.701 [2024-12-13 06:22:53.782095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:17:02.701 [2024-12-13 06:22:53.782101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
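Before the DPDK/SPDK startup notices above, the log launches nvmf_tgt inside the namespace (pid 943216) and calls `waitforlisten`, which per the xtrace polls with `max_retries=100` until the process is alive and `/var/tmp/spdk.sock` is listening. A simplified sketch of that pattern follows; checking only that the UNIX socket file exists is an assumption here (the real autotest helper also issues a probe RPC over the socket).

```shell
# Simplified waitforlisten: succeed once the pid is alive AND the RPC UNIX
# socket exists; fail if the process dies or the retries run out. Assumption:
# a bare -S check stands in for the real helper's probe RPC.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1    # target process died
        if [[ -S "$rpc_addr" ]]; then
            return 0                              # socket is listening
        fi
        sleep 0.1
    done
    return 1                                      # timed out
}
```

Polling the socket rather than sleeping a fixed interval is what keeps the subsequent rpc.py calls in this test from racing the target's startup.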
00:17:02.701 [2024-12-13 06:22:53.783465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:02.701 [2024-12-13 06:22:53.783557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:02.701 [2024-12-13 06:22:53.783664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.701 [2024-12-13 06:22:53.783665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:02.701 06:22:53 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8557 00:17:02.701 [2024-12-13 06:22:54.093028] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:02.701 { 00:17:02.701 "nqn": "nqn.2016-06.io.spdk:cnode8557", 00:17:02.701 "tgt_name": "foobar", 00:17:02.701 "method": "nvmf_create_subsystem", 00:17:02.701 "req_id": 1 00:17:02.701 } 00:17:02.701 Got JSON-RPC error 
response 00:17:02.701 response: 00:17:02.701 { 00:17:02.701 "code": -32603, 00:17:02.701 "message": "Unable to find target foobar" 00:17:02.701 }' 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:02.701 { 00:17:02.701 "nqn": "nqn.2016-06.io.spdk:cnode8557", 00:17:02.701 "tgt_name": "foobar", 00:17:02.701 "method": "nvmf_create_subsystem", 00:17:02.701 "req_id": 1 00:17:02.701 } 00:17:02.701 Got JSON-RPC error response 00:17:02.701 response: 00:17:02.701 { 00:17:02.701 "code": -32603, 00:17:02.701 "message": "Unable to find target foobar" 00:17:02.701 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17002 00:17:02.701 [2024-12-13 06:22:54.297742] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17002: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:02.701 { 00:17:02.701 "nqn": "nqn.2016-06.io.spdk:cnode17002", 00:17:02.701 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:02.701 "method": "nvmf_create_subsystem", 00:17:02.701 "req_id": 1 00:17:02.701 } 00:17:02.701 Got JSON-RPC error response 00:17:02.701 response: 00:17:02.701 { 00:17:02.701 "code": -32602, 00:17:02.701 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:02.701 }' 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:02.701 { 00:17:02.701 "nqn": "nqn.2016-06.io.spdk:cnode17002", 00:17:02.701 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:02.701 "method": "nvmf_create_subsystem", 00:17:02.701 
"req_id": 1 00:17:02.701 } 00:17:02.701 Got JSON-RPC error response 00:17:02.701 response: 00:17:02.701 { 00:17:02.701 "code": -32602, 00:17:02.701 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:02.701 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:02.701 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32364 00:17:02.960 [2024-12-13 06:22:54.498399] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32364: invalid model number 'SPDK_Controller' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:02.960 { 00:17:02.960 "nqn": "nqn.2016-06.io.spdk:cnode32364", 00:17:02.960 "model_number": "SPDK_Controller\u001f", 00:17:02.960 "method": "nvmf_create_subsystem", 00:17:02.960 "req_id": 1 00:17:02.960 } 00:17:02.960 Got JSON-RPC error response 00:17:02.960 response: 00:17:02.960 { 00:17:02.960 "code": -32602, 00:17:02.960 "message": "Invalid MN SPDK_Controller\u001f" 00:17:02.960 }' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:02.960 { 00:17:02.960 "nqn": "nqn.2016-06.io.spdk:cnode32364", 00:17:02.960 "model_number": "SPDK_Controller\u001f", 00:17:02.960 "method": "nvmf_create_subsystem", 00:17:02.960 "req_id": 1 00:17:02.960 } 00:17:02.960 Got JSON-RPC error response 00:17:02.960 response: 00:17:02.960 { 00:17:02.960 "code": -32602, 00:17:02.960 "message": "Invalid MN SPDK_Controller\u001f" 00:17:02.960 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
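The three failures logged so far all follow one pattern: call `rpc.py nvmf_create_subsystem` with a bad target name, serial number, or model number, expect a non-zero exit, and glob-match the JSON-RPC error text ("Unable to find target" with code -32603, "Invalid SN" / "Invalid MN" with code -32602). A sketch of that pattern is below; `check_error` is a hypothetical helper (the real target/invalid.sh inlines the capture into `out` and the `[[ ... == *pattern* ]]` match), while the rpc.py path, NQNs, and `\037`-suffixed values are taken from the log.

```shell
# Negative-path check pattern from target/invalid.sh, as a sketch.
# check_error is a made-up helper; the real script inlines this logic.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

check_error() {    # usage: check_error <expected substring> <rpc args...>
    local expected=$1 out
    shift
    if out=$("$rpc" "$@" 2>&1); then
        return 1                      # the RPC was supposed to fail
    fi
    [[ "$out" == *"$expected"* ]]     # error text must match, e.g. "Invalid SN"
}

if [[ -x $rpc ]]; then    # only meaningful against a running nvmf_tgt
    check_error 'Unable to find target' \
        nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8557
    check_error 'Invalid SN' \
        nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17002
    check_error 'Invalid MN' \
        nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32364
fi
```

The `$'...\037'` quoting embeds the unit-separator control byte that the server echoes back as `\u001f` in the JSON error, which is what makes the serial and model numbers invalid.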
00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 
00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:02.960 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:03.218 
06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ( == \- ]] 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '(Lp_.lR{!NY2{-i)i02nZ' 00:17:03.218 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '(Lp_.lR{!NY2{-i)i02nZ' nqn.2016-06.io.spdk:cnode31331 00:17:03.218 [2024-12-13 06:22:54.843560] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31331: invalid serial number '(Lp_.lR{!NY2{-i)i02nZ' 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:03.477 { 00:17:03.477 "nqn": "nqn.2016-06.io.spdk:cnode31331", 00:17:03.477 "serial_number": "(Lp_.lR{!NY2{-i)i02nZ", 00:17:03.477 "method": "nvmf_create_subsystem", 00:17:03.477 "req_id": 1 00:17:03.477 } 00:17:03.477 Got JSON-RPC error response 00:17:03.477 response: 00:17:03.477 { 00:17:03.477 "code": -32602, 00:17:03.477 "message": "Invalid SN (Lp_.lR{!NY2{-i)i02nZ" 00:17:03.477 }' 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:03.477 { 00:17:03.477 "nqn": "nqn.2016-06.io.spdk:cnode31331", 00:17:03.477 "serial_number": "(Lp_.lR{!NY2{-i)i02nZ", 00:17:03.477 
"method": "nvmf_create_subsystem", 00:17:03.477 "req_id": 1 00:17:03.477 } 00:17:03.477 Got JSON-RPC error response 00:17:03.477 response: 00:17:03.477 { 00:17:03.477 "code": -32602, 00:17:03.477 "message": "Invalid SN (Lp_.lR{!NY2{-i)i02nZ" 00:17:03.477 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:17:03.477 06:22:54 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.477 06:22:54 
[... xtrace loop output elided: target/invalid.sh lines 24-25 iterate while (( ll < length )), each pass using printf %x and echo -e to pick one random printable character and string+= to append it, building the 41-character random model number echoed and passed to nvmf_create_subsystem below ...] 00:17:03.478 06:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.479 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ] == \- ]] 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ']aF-L]#@Q=Cw>?aj%cgffEMM'\''6x,g[z#/2xC+7~G' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d ']aF-L]#@Q=Cw>?aj%cgffEMM'\''6x,g[z#/2xC+7~G' nqn.2016-06.io.spdk:cnode12442 00:17:03.736 [2024-12-13 06:22:55.313042] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12442: invalid model number ']aF-L]#@Q=Cw>?aj%cgffEMM'6x,g[z#/2xC+7~G' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:17:03.736 { 00:17:03.736 "nqn": 
"nqn.2016-06.io.spdk:cnode12442", 00:17:03.736 "model_number": "]aF-L]#@Q=Cw>?aj%cgffEMM'\''6x,g\u007f[z#/2xC+7~G", 00:17:03.736 "method": "nvmf_create_subsystem", 00:17:03.736 "req_id": 1 00:17:03.736 } 00:17:03.736 Got JSON-RPC error response 00:17:03.736 response: 00:17:03.736 { 00:17:03.736 "code": -32602, 00:17:03.736 "message": "Invalid MN ]aF-L]#@Q=Cw>?aj%cgffEMM'\''6x,g\u007f[z#/2xC+7~G" 00:17:03.736 }' 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:17:03.736 { 00:17:03.736 "nqn": "nqn.2016-06.io.spdk:cnode12442", 00:17:03.736 "model_number": "]aF-L]#@Q=Cw>?aj%cgffEMM'6x,g\u007f[z#/2xC+7~G", 00:17:03.736 "method": "nvmf_create_subsystem", 00:17:03.736 "req_id": 1 00:17:03.736 } 00:17:03.736 Got JSON-RPC error response 00:17:03.736 response: 00:17:03.736 { 00:17:03.736 "code": -32602, 00:17:03.736 "message": "Invalid MN ]aF-L]#@Q=Cw>?aj%cgffEMM'6x,g\u007f[z#/2xC+7~G" 00:17:03.736 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:03.736 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:17:03.994 [2024-12-13 06:22:55.529831] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.994 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:17:04.251 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:17:04.252 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:17:04.252 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:17:04.252 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:17:04.252 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:17:04.510 [2024-12-13 06:22:55.936360] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:17:04.510 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:17:04.510 { 00:17:04.510 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.510 "listen_address": { 00:17:04.510 "trtype": "tcp", 00:17:04.510 "traddr": "", 00:17:04.510 "trsvcid": "4421" 00:17:04.510 }, 00:17:04.510 "method": "nvmf_subsystem_remove_listener", 00:17:04.510 "req_id": 1 00:17:04.510 } 00:17:04.510 Got JSON-RPC error response 00:17:04.510 response: 00:17:04.510 { 00:17:04.510 "code": -32602, 00:17:04.510 "message": "Invalid parameters" 00:17:04.510 }' 00:17:04.510 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:17:04.510 { 00:17:04.510 "nqn": "nqn.2016-06.io.spdk:cnode", 00:17:04.510 "listen_address": { 00:17:04.510 "trtype": "tcp", 00:17:04.510 "traddr": "", 00:17:04.510 "trsvcid": "4421" 00:17:04.510 }, 00:17:04.510 "method": "nvmf_subsystem_remove_listener", 00:17:04.510 "req_id": 1 00:17:04.510 } 00:17:04.510 Got JSON-RPC error response 00:17:04.510 response: 00:17:04.510 { 00:17:04.510 "code": -32602, 00:17:04.510 "message": "Invalid parameters" 00:17:04.510 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:17:04.510 06:22:55 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6436 -i 0 00:17:04.510 [2024-12-13 06:22:56.132999] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6436: invalid cntlid range [0-65519] 00:17:04.510 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:17:04.510 { 00:17:04.510 "nqn": 
"nqn.2016-06.io.spdk:cnode6436", 00:17:04.510 "min_cntlid": 0, 00:17:04.510 "method": "nvmf_create_subsystem", 00:17:04.510 "req_id": 1 00:17:04.510 } 00:17:04.510 Got JSON-RPC error response 00:17:04.510 response: 00:17:04.510 { 00:17:04.510 "code": -32602, 00:17:04.510 "message": "Invalid cntlid range [0-65519]" 00:17:04.510 }' 00:17:04.510 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:17:04.510 { 00:17:04.510 "nqn": "nqn.2016-06.io.spdk:cnode6436", 00:17:04.510 "min_cntlid": 0, 00:17:04.510 "method": "nvmf_create_subsystem", 00:17:04.510 "req_id": 1 00:17:04.510 } 00:17:04.510 Got JSON-RPC error response 00:17:04.510 response: 00:17:04.510 { 00:17:04.510 "code": -32602, 00:17:04.510 "message": "Invalid cntlid range [0-65519]" 00:17:04.510 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.768 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26030 -i 65520 00:17:04.768 [2024-12-13 06:22:56.321623] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26030: invalid cntlid range [65520-65519] 00:17:04.768 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:17:04.768 { 00:17:04.768 "nqn": "nqn.2016-06.io.spdk:cnode26030", 00:17:04.768 "min_cntlid": 65520, 00:17:04.768 "method": "nvmf_create_subsystem", 00:17:04.768 "req_id": 1 00:17:04.768 } 00:17:04.768 Got JSON-RPC error response 00:17:04.768 response: 00:17:04.768 { 00:17:04.768 "code": -32602, 00:17:04.768 "message": "Invalid cntlid range [65520-65519]" 00:17:04.768 }' 00:17:04.768 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:17:04.768 { 00:17:04.768 "nqn": "nqn.2016-06.io.spdk:cnode26030", 00:17:04.768 "min_cntlid": 65520, 00:17:04.768 "method": "nvmf_create_subsystem", 00:17:04.768 "req_id": 1 
00:17:04.768 } 00:17:04.768 Got JSON-RPC error response 00:17:04.768 response: 00:17:04.768 { 00:17:04.768 "code": -32602, 00:17:04.768 "message": "Invalid cntlid range [65520-65519]" 00:17:04.768 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:04.768 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15278 -I 0 00:17:05.026 [2024-12-13 06:22:56.526355] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15278: invalid cntlid range [1-0] 00:17:05.026 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:17:05.026 { 00:17:05.026 "nqn": "nqn.2016-06.io.spdk:cnode15278", 00:17:05.026 "max_cntlid": 0, 00:17:05.026 "method": "nvmf_create_subsystem", 00:17:05.026 "req_id": 1 00:17:05.026 } 00:17:05.026 Got JSON-RPC error response 00:17:05.026 response: 00:17:05.026 { 00:17:05.026 "code": -32602, 00:17:05.026 "message": "Invalid cntlid range [1-0]" 00:17:05.026 }' 00:17:05.026 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:17:05.026 { 00:17:05.026 "nqn": "nqn.2016-06.io.spdk:cnode15278", 00:17:05.026 "max_cntlid": 0, 00:17:05.026 "method": "nvmf_create_subsystem", 00:17:05.026 "req_id": 1 00:17:05.026 } 00:17:05.026 Got JSON-RPC error response 00:17:05.026 response: 00:17:05.026 { 00:17:05.026 "code": -32602, 00:17:05.026 "message": "Invalid cntlid range [1-0]" 00:17:05.026 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.026 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5140 -I 65520 00:17:05.283 [2024-12-13 06:22:56.731041] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5140: invalid cntlid range [1-65520] 
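The failing calls above exercise the target's controller-ID bounds check: every min/max combination outside 1..65519, and min > max, is rejected with JSON-RPC code -32602. A minimal sketch of that validation in Python (an illustrative reimplementation, not SPDK's actual code, which lives in nvmf_rpc.c; the function name and error string merely mimic the log output above):

```python
CNTLID_MIN = 1        # cntlid 0 is invalid per the log ("[0-65519]")
CNTLID_MAX = 0xFFEF   # 65519; higher values are rejected ("[65520-65519]", "[1-65520]")

def validate_cntlid_range(min_cntlid: int = CNTLID_MIN,
                          max_cntlid: int = CNTLID_MAX) -> None:
    """Raise ValueError for any cntlid range the log shows being rejected."""
    if (not CNTLID_MIN <= min_cntlid <= CNTLID_MAX
            or not CNTLID_MIN <= max_cntlid <= CNTLID_MAX
            or min_cntlid > max_cntlid):
        # -32602 is the JSON-RPC "Invalid params" code seen in the responses
        raise ValueError(f"Invalid cntlid range [{min_cntlid}-{max_cntlid}]")

validate_cntlid_range(1, 65519)   # full valid range: accepted
try:
    validate_cntlid_range(6, 5)   # min > max, as in the cnode21455 case below
except ValueError as e:
    print(e)                      # Invalid cntlid range [6-5]
```

Each rejected case in the log ([0-65519], [65520-65519], [1-0], [1-65520], [6-5]) maps to one branch of this predicate.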
00:17:05.283 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:17:05.283 { 00:17:05.283 "nqn": "nqn.2016-06.io.spdk:cnode5140", 00:17:05.283 "max_cntlid": 65520, 00:17:05.283 "method": "nvmf_create_subsystem", 00:17:05.283 "req_id": 1 00:17:05.283 } 00:17:05.283 Got JSON-RPC error response 00:17:05.283 response: 00:17:05.283 { 00:17:05.283 "code": -32602, 00:17:05.283 "message": "Invalid cntlid range [1-65520]" 00:17:05.283 }' 00:17:05.283 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:17:05.283 { 00:17:05.283 "nqn": "nqn.2016-06.io.spdk:cnode5140", 00:17:05.283 "max_cntlid": 65520, 00:17:05.283 "method": "nvmf_create_subsystem", 00:17:05.283 "req_id": 1 00:17:05.283 } 00:17:05.283 Got JSON-RPC error response 00:17:05.283 response: 00:17:05.283 { 00:17:05.283 "code": -32602, 00:17:05.283 "message": "Invalid cntlid range [1-65520]" 00:17:05.283 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.283 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode21455 -i 6 -I 5 00:17:05.541 [2024-12-13 06:22:56.947793] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21455: invalid cntlid range [6-5] 00:17:05.541 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:17:05.541 { 00:17:05.541 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:17:05.541 "min_cntlid": 6, 00:17:05.541 "max_cntlid": 5, 00:17:05.541 "method": "nvmf_create_subsystem", 00:17:05.541 "req_id": 1 00:17:05.541 } 00:17:05.541 Got JSON-RPC error response 00:17:05.541 response: 00:17:05.541 { 00:17:05.541 "code": -32602, 00:17:05.541 "message": "Invalid cntlid range [6-5]" 00:17:05.541 }' 00:17:05.541 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:17:05.541 { 
00:17:05.541 "nqn": "nqn.2016-06.io.spdk:cnode21455", 00:17:05.541 "min_cntlid": 6, 00:17:05.541 "max_cntlid": 5, 00:17:05.541 "method": "nvmf_create_subsystem", 00:17:05.541 "req_id": 1 00:17:05.541 } 00:17:05.541 Got JSON-RPC error response 00:17:05.541 response: 00:17:05.541 { 00:17:05.541 "code": -32602, 00:17:05.541 "message": "Invalid cntlid range [6-5]" 00:17:05.541 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:17:05.541 06:22:56 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:17:05.541 { 00:17:05.541 "name": "foobar", 00:17:05.541 "method": "nvmf_delete_target", 00:17:05.541 "req_id": 1 00:17:05.541 } 00:17:05.541 Got JSON-RPC error response 00:17:05.541 response: 00:17:05.541 { 00:17:05.541 "code": -32602, 00:17:05.541 "message": "The specified target doesn'\''t exist, cannot delete it." 00:17:05.541 }' 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:17:05.541 { 00:17:05.541 "name": "foobar", 00:17:05.541 "method": "nvmf_delete_target", 00:17:05.541 "req_id": 1 00:17:05.541 } 00:17:05.541 Got JSON-RPC error response 00:17:05.541 response: 00:17:05.541 { 00:17:05.541 "code": -32602, 00:17:05.541 "message": "The specified target doesn't exist, cannot delete it." 
00:17:05.541 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # sync 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set +e 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:05.541 rmmod nvme_tcp 00:17:05.541 rmmod nvme_fabrics 00:17:05.541 rmmod nvme_keyring 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@128 -- # set -e 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # return 0 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@517 -- # '[' -n 943216 ']' 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@518 -- # killprocess 943216 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 943216 ']' 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 943216 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.541 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 943216 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 943216' 00:17:05.801 killing process with pid 943216 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 943216 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 943216 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # iptr 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-save 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@791 -- # iptables-restore 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.801 06:22:57 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:05.801 06:22:57 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:08.336 00:17:08.336 real 0m12.068s 00:17:08.336 user 0m18.617s 00:17:08.336 sys 0m5.322s 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:08.336 ************************************ 00:17:08.336 END TEST nvmf_invalid 00:17:08.336 ************************************ 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.336 ************************************ 00:17:08.336 START TEST nvmf_connect_stress 00:17:08.336 ************************************ 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:08.336 * Looking for test storage... 
00:17:08.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:08.336 06:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.336 06:22:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:08.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.336 --rc genhtml_branch_coverage=1 00:17:08.336 --rc genhtml_function_coverage=1 00:17:08.336 --rc genhtml_legend=1 00:17:08.336 --rc geninfo_all_blocks=1 00:17:08.336 --rc geninfo_unexecuted_blocks=1 00:17:08.336 00:17:08.336 ' 00:17:08.336 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:08.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.336 --rc genhtml_branch_coverage=1 00:17:08.336 --rc genhtml_function_coverage=1 00:17:08.336 --rc genhtml_legend=1 00:17:08.336 --rc geninfo_all_blocks=1 00:17:08.336 --rc geninfo_unexecuted_blocks=1 00:17:08.336 00:17:08.336 ' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.337 --rc genhtml_branch_coverage=1 00:17:08.337 --rc genhtml_function_coverage=1 00:17:08.337 --rc genhtml_legend=1 00:17:08.337 --rc geninfo_all_blocks=1 00:17:08.337 --rc geninfo_unexecuted_blocks=1 00:17:08.337 00:17:08.337 ' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:08.337 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.337 --rc genhtml_branch_coverage=1 00:17:08.337 --rc genhtml_function_coverage=1 00:17:08.337 --rc genhtml_legend=1 00:17:08.337 --rc geninfo_all_blocks=1 00:17:08.337 --rc geninfo_unexecuted_blocks=1 00:17:08.337 00:17:08.337 ' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.337 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:08.337 06:22:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:14.905 06:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:14.905 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.905 06:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:14.905 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:14.905 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.906 06:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:14.906 Found net devices under 0000:af:00.0: cvl_0_0 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:14.906 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:14.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:14.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.440 ms 00:17:14.906 00:17:14.906 --- 10.0.0.2 ping statistics --- 00:17:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.906 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:14.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:14.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:17:14.906 00:17:14.906 --- 10.0.0.1 ping statistics --- 00:17:14.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:14.906 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:14.906 06:23:05 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=947518 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 947518 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 947518 ']' 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:14.906 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.906 [2024-12-13 06:23:05.674787] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:14.906 [2024-12-13 06:23:05.674830] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:14.906 [2024-12-13 06:23:05.752436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:14.906 [2024-12-13 06:23:05.774465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:14.906 [2024-12-13 06:23:05.774505] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:14.906 [2024-12-13 06:23:05.774512] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:14.906 [2024-12-13 06:23:05.774518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:14.907 [2024-12-13 06:23:05.774523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:14.907 [2024-12-13 06:23:05.775729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.907 [2024-12-13 06:23:05.775837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:14.907 [2024-12-13 06:23:05.775837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 [2024-12-13 06:23:05.918957] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 [2024-12-13 06:23:05.939190] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 NULL1 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=947543 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.907 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.165 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.165 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:15.165 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.165 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.165 06:23:06 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.423 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.423 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:15.423 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.423 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.423 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:15.988 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.988 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:15.988 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:15.988 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.988 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.246 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.246 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:16.246 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.246 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.246 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.504 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.504 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:16.504 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.504 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.504 06:23:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:16.762 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.762 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:16.762 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:16.762 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.762 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.019 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.020 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:17.020 06:23:08 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.020 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.020 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.585 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.585 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:17.585 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.585 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.585 06:23:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:17.843 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.843 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:17.843 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:17.843 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.843 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.102 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.102 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:18.102 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.102 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.102 06:23:09 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.359 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.360 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:18.360 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.360 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.360 06:23:09 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:18.617 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.617 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:18.617 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:18.617 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.617 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.183 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.183 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:19.183 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.183 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.183 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.441 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.441 06:23:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:19.441 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.441 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.441 06:23:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.698 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.698 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:19.699 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.699 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.699 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:19.956 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.956 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:19.956 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:19.956 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.956 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.522 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.522 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:20.522 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.522 06:23:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.522 06:23:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:20.780 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.780 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:20.780 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:20.780 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.780 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.037 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.038 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:21.038 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.038 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.038 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.295 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.295 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:21.295 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.295 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.295 06:23:12 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:21.553 06:23:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.553 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:21.553 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:21.553 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.553 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.119 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.119 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:22.119 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.119 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.119 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.377 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.377 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:22.377 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.377 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.377 06:23:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.635 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.635 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:22.635 
06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.635 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.635 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:22.893 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.893 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:22.893 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:22.893 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.893 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.458 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.458 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:23.458 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.458 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.458 06:23:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.715 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.715 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:23.715 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.715 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.715 
06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:23.972 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.972 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:23.972 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:23.972 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.972 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.229 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.229 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:24.230 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:24.230 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.230 06:23:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:24.488 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 947543 00:17:24.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (947543) - No such process 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 947543 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:24.488 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:24.746 rmmod nvme_tcp 00:17:24.746 rmmod nvme_fabrics 00:17:24.746 rmmod nvme_keyring 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 947518 ']' 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 947518 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 947518 ']' 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 947518 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 
00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 947518 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 947518' 00:17:24.746 killing process with pid 947518 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 947518 00:17:24.746 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 947518 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:25.005 06:23:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:26.912 00:17:26.912 real 0m18.971s 00:17:26.912 user 0m39.309s 00:17:26.912 sys 0m8.574s 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:26.912 ************************************ 00:17:26.912 END TEST nvmf_connect_stress 00:17:26.912 ************************************ 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.912 ************************************ 00:17:26.912 START TEST nvmf_fused_ordering 00:17:26.912 ************************************ 00:17:26.912 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:27.171 * Looking for test storage... 
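The `nvmftestfini` teardown traced above (module unload, target-process kill, namespace removal, address flush) can be sketched as the dry-run script below. The `run` echo wrapper is an assumption added for safe testing, not part of the harness; the PID (947518) and interface/namespace names are the ones from this log. Swap `run` for `sudo` (or remove it) to execute for real.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the nvmftestfini teardown traced in the log above.
# 'run' only prints each command so the sequence can be inspected safely.
run() { printf '+ %s\n' "$*"; }

run modprobe -v -r nvme-tcp        # unload the kernel NVMe/TCP initiator module
run modprobe -v -r nvme-fabrics    # then the fabrics core it depends on
run kill 947518                    # stop the nvmf_tgt reactor process from this run
run ip netns del cvl_0_0_ns_spdk   # drop the target-side network namespace
run ip -4 addr flush cvl_0_1       # clear the initiator-side test address
```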
00:17:27.171 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:27.171 06:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:27.171 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:27.172 06:23:18 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:27.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.172 --rc genhtml_branch_coverage=1 00:17:27.172 --rc genhtml_function_coverage=1 00:17:27.172 --rc genhtml_legend=1 00:17:27.172 --rc geninfo_all_blocks=1 00:17:27.172 --rc geninfo_unexecuted_blocks=1 00:17:27.172 00:17:27.172 ' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:27.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.172 --rc genhtml_branch_coverage=1 00:17:27.172 --rc genhtml_function_coverage=1 00:17:27.172 --rc genhtml_legend=1 00:17:27.172 --rc geninfo_all_blocks=1 00:17:27.172 --rc geninfo_unexecuted_blocks=1 00:17:27.172 00:17:27.172 ' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:27.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.172 --rc genhtml_branch_coverage=1 00:17:27.172 --rc genhtml_function_coverage=1 00:17:27.172 --rc genhtml_legend=1 00:17:27.172 --rc geninfo_all_blocks=1 00:17:27.172 --rc geninfo_unexecuted_blocks=1 00:17:27.172 00:17:27.172 ' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:27.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:27.172 --rc genhtml_branch_coverage=1 00:17:27.172 --rc genhtml_function_coverage=1 00:17:27.172 --rc genhtml_legend=1 00:17:27.172 --rc geninfo_all_blocks=1 00:17:27.172 --rc geninfo_unexecuted_blocks=1 00:17:27.172 00:17:27.172 ' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:27.172 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:27.172 06:23:18 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:33.742 06:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:33.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.742 06:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:33.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.742 06:23:24 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:33.742 Found net devices under 0000:af:00.0: cvl_0_0 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:33.742 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:33.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:33.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:17:33.742 00:17:33.742 --- 10.0.0.2 ping statistics --- 00:17:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.742 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:33.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:33.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.226 ms 00:17:33.742 00:17:33.742 --- 10.0.0.1 ping statistics --- 00:17:33.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:33.742 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:33.742 06:23:24 
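The `nvmf_tcp_init` bring-up traced above (moving one NIC port into a private namespace, addressing both ends, opening TCP port 4420, and ping-checking both directions) can be reproduced roughly as the dry-run below. Interface names `cvl_0_0`/`cvl_0_1`, the `10.0.0.0/24` addresses, and port 4420 come from the log; the `run` echo wrapper is an assumption added so the sketch is testable without root.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the TCP test-network setup traced in the log above.
# 'run' only echoes each command; swap it for 'sudo' to apply for real.
run() { printf '+ %s\n' "$*"; }

run ip netns add cvl_0_0_ns_spdk                          # target namespace
run ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move target port into it
run ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator-side address
run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
run ip netns exec cvl_0_0_ns_spdk ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
run ping -c 1 10.0.0.2                                    # initiator -> target
run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator
```

Because the two ports of one physical NIC are split across namespaces, the pings traverse the wire (or internal loopback of the switch), which is why the harness treats a successful round trip as proof the phy link is usable.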
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=952674 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 952674 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 952674 ']' 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:33.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:33.742 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.742 [2024-12-13 06:23:24.728543] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:33.742 [2024-12-13 06:23:24.728591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:33.743 [2024-12-13 06:23:24.810874] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:33.743 [2024-12-13 06:23:24.832207] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:33.743 [2024-12-13 06:23:24.832240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:33.743 [2024-12-13 06:23:24.832247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:33.743 [2024-12-13 06:23:24.832254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:33.743 [2024-12-13 06:23:24.832259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:33.743 [2024-12-13 06:23:24.832738] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 [2024-12-13 06:23:24.967860] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 [2024-12-13 06:23:24.988030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 NULL1 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:33.743 06:23:24 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.743 06:23:25 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:33.743 [2024-12-13 06:23:25.046817] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:33.743 [2024-12-13 06:23:25.046859] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952826 ] 00:17:33.743 Attached to nqn.2016-06.io.spdk:cnode1 00:17:33.743 Namespace ID: 1 size: 1GB 00:17:33.743 fused_ordering(0) 00:17:33.743 fused_ordering(1) 00:17:33.743 fused_ordering(2) 00:17:33.743 fused_ordering(3) 00:17:33.743 fused_ordering(4) 00:17:33.743 fused_ordering(5) 00:17:33.743 fused_ordering(6) 00:17:33.743 fused_ordering(7) 00:17:33.743 fused_ordering(8) 00:17:33.743 fused_ordering(9) 00:17:33.743 fused_ordering(10) 00:17:33.743 fused_ordering(11) 00:17:33.743 fused_ordering(12) 00:17:33.743 fused_ordering(13) 00:17:33.743 fused_ordering(14) 00:17:33.743 fused_ordering(15) 00:17:33.743 fused_ordering(16) 00:17:33.743 fused_ordering(17) 00:17:33.743 fused_ordering(18) 00:17:33.743 fused_ordering(19) 00:17:33.743 fused_ordering(20) 00:17:33.743 fused_ordering(21) 00:17:33.743 fused_ordering(22) 00:17:33.743 fused_ordering(23) 00:17:33.743 fused_ordering(24) 00:17:33.743 fused_ordering(25) 00:17:33.743 fused_ordering(26) 00:17:33.743 fused_ordering(27) 00:17:33.743 
fused_ordering(28) 00:17:33.743 fused_ordering(29) 00:17:33.743 fused_ordering(30) 00:17:33.743 fused_ordering(31) 00:17:33.743 fused_ordering(32) 00:17:33.743 fused_ordering(33) 00:17:33.743 fused_ordering(34) 00:17:33.743 fused_ordering(35) 00:17:33.743 fused_ordering(36) 00:17:33.743 fused_ordering(37) 00:17:33.743 fused_ordering(38) 00:17:33.743 fused_ordering(39) 00:17:33.743 fused_ordering(40) 00:17:33.743 fused_ordering(41) 00:17:33.743 fused_ordering(42) 00:17:33.743 fused_ordering(43) 00:17:33.743 fused_ordering(44) 00:17:33.743 fused_ordering(45) 00:17:33.743 fused_ordering(46) 00:17:33.743 fused_ordering(47) 00:17:33.743 fused_ordering(48) 00:17:33.743 fused_ordering(49) 00:17:33.743 fused_ordering(50) 00:17:33.743 fused_ordering(51) 00:17:33.743 fused_ordering(52) 00:17:33.743 fused_ordering(53) 00:17:33.743 fused_ordering(54) 00:17:33.743 fused_ordering(55) 00:17:33.743 fused_ordering(56) 00:17:33.743 fused_ordering(57) 00:17:33.743 fused_ordering(58) 00:17:33.743 fused_ordering(59) 00:17:33.743 fused_ordering(60) 00:17:33.743 fused_ordering(61) 00:17:33.743 fused_ordering(62) 00:17:33.743 fused_ordering(63) 00:17:33.743 fused_ordering(64) 00:17:33.743 fused_ordering(65) 00:17:33.743 fused_ordering(66) 00:17:33.743 fused_ordering(67) 00:17:33.743 fused_ordering(68) 00:17:33.743 fused_ordering(69) 00:17:33.743 fused_ordering(70) 00:17:33.743 fused_ordering(71) 00:17:33.743 fused_ordering(72) 00:17:33.743 fused_ordering(73) 00:17:33.743 fused_ordering(74) 00:17:33.743 fused_ordering(75) 00:17:33.743 fused_ordering(76) 00:17:33.743 fused_ordering(77) 00:17:33.743 fused_ordering(78) 00:17:33.743 fused_ordering(79) 00:17:33.743 fused_ordering(80) 00:17:33.743 fused_ordering(81) 00:17:33.743 fused_ordering(82) 00:17:33.743 fused_ordering(83) 00:17:33.743 fused_ordering(84) 00:17:33.743 fused_ordering(85) 00:17:33.743 fused_ordering(86) 00:17:33.743 fused_ordering(87) 00:17:33.743 fused_ordering(88) 00:17:33.743 fused_ordering(89) 00:17:33.743 
fused_ordering(90) 00:17:33.743 fused_ordering(91) 00:17:33.743 fused_ordering(92) 00:17:33.743 fused_ordering(93) 00:17:33.743 fused_ordering(94) 00:17:33.743 fused_ordering(95) 00:17:33.743 fused_ordering(96) 00:17:33.743 fused_ordering(97) 00:17:33.743 fused_ordering(98) 00:17:33.743 fused_ordering(99) 00:17:33.743 fused_ordering(100) 00:17:33.743 fused_ordering(101) 00:17:33.743 fused_ordering(102) 00:17:33.743 fused_ordering(103) 00:17:33.743 fused_ordering(104) 00:17:33.743 fused_ordering(105) 00:17:33.743 fused_ordering(106) 00:17:33.743 fused_ordering(107) 00:17:33.743 fused_ordering(108) 00:17:33.743 fused_ordering(109) 00:17:33.743 fused_ordering(110) 00:17:33.743 fused_ordering(111) 00:17:33.743 fused_ordering(112) 00:17:33.743 fused_ordering(113) 00:17:33.743 fused_ordering(114) 00:17:33.743 fused_ordering(115) 00:17:33.743 fused_ordering(116) 00:17:33.743 fused_ordering(117) 00:17:33.743 fused_ordering(118) 00:17:33.743 fused_ordering(119) 00:17:33.743 fused_ordering(120) 00:17:33.743 fused_ordering(121) 00:17:33.743 fused_ordering(122) 00:17:33.743 fused_ordering(123) 00:17:33.743 fused_ordering(124) 00:17:33.743 fused_ordering(125) 00:17:33.743 fused_ordering(126) 00:17:33.743 fused_ordering(127) 00:17:33.743 fused_ordering(128) 00:17:33.743 fused_ordering(129) 00:17:33.743 fused_ordering(130) 00:17:33.743 fused_ordering(131) 00:17:33.743 fused_ordering(132) 00:17:33.743 fused_ordering(133) 00:17:33.743 fused_ordering(134) 00:17:33.743 fused_ordering(135) 00:17:33.743 fused_ordering(136) 00:17:33.743 fused_ordering(137) 00:17:33.743 fused_ordering(138) 00:17:33.743 fused_ordering(139) 00:17:33.743 fused_ordering(140) 00:17:33.743 fused_ordering(141) 00:17:33.743 fused_ordering(142) 00:17:33.743 fused_ordering(143) 00:17:33.743 fused_ordering(144) 00:17:33.743 fused_ordering(145) 00:17:33.743 fused_ordering(146) 00:17:33.743 fused_ordering(147) 00:17:33.744 fused_ordering(148) 00:17:33.744 fused_ordering(149) 00:17:33.744 fused_ordering(150) 
00:17:33.744 fused_ordering(151) 00:17:33.744 fused_ordering(152) 00:17:33.744 fused_ordering(153) 00:17:33.744 fused_ordering(154) 00:17:33.744 fused_ordering(155) 00:17:33.744 fused_ordering(156) 00:17:33.744 fused_ordering(157) 00:17:33.744 fused_ordering(158) 00:17:33.744 fused_ordering(159) 00:17:33.744 fused_ordering(160) 00:17:33.744 fused_ordering(161) 00:17:33.744 fused_ordering(162) 00:17:33.744 fused_ordering(163) 00:17:33.744 fused_ordering(164) 00:17:33.744 fused_ordering(165) 00:17:33.744 fused_ordering(166) 00:17:33.744 fused_ordering(167) 00:17:33.744 fused_ordering(168) 00:17:33.744 fused_ordering(169) 00:17:33.744 fused_ordering(170) 00:17:33.744 fused_ordering(171) 00:17:33.744 fused_ordering(172) 00:17:33.744 fused_ordering(173) 00:17:33.744 fused_ordering(174) 00:17:33.744 fused_ordering(175) 00:17:33.744 fused_ordering(176) 00:17:33.744 fused_ordering(177) 00:17:33.744 fused_ordering(178) 00:17:33.744 fused_ordering(179) 00:17:33.744 fused_ordering(180) 00:17:33.744 fused_ordering(181) 00:17:33.744 fused_ordering(182) 00:17:33.744 fused_ordering(183) 00:17:33.744 fused_ordering(184) 00:17:33.744 fused_ordering(185) 00:17:33.744 fused_ordering(186) 00:17:33.744 fused_ordering(187) 00:17:33.744 fused_ordering(188) 00:17:33.744 fused_ordering(189) 00:17:33.744 fused_ordering(190) 00:17:33.744 fused_ordering(191) 00:17:33.744 fused_ordering(192) 00:17:33.744 fused_ordering(193) 00:17:33.744 fused_ordering(194) 00:17:33.744 fused_ordering(195) 00:17:33.744 fused_ordering(196) 00:17:33.744 fused_ordering(197) 00:17:33.744 fused_ordering(198) 00:17:33.744 fused_ordering(199) 00:17:33.744 fused_ordering(200) 00:17:33.744 fused_ordering(201) 00:17:33.744 fused_ordering(202) 00:17:33.744 fused_ordering(203) 00:17:33.744 fused_ordering(204) 00:17:33.744 fused_ordering(205) 00:17:34.002 fused_ordering(206) 00:17:34.002 fused_ordering(207) 00:17:34.002 fused_ordering(208) 00:17:34.002 fused_ordering(209) 00:17:34.002 fused_ordering(210) 00:17:34.002 
fused_ordering(211) 00:17:34.002 fused_ordering(212) 00:17:34.002 fused_ordering(213) 00:17:34.002 fused_ordering(214) 00:17:34.002 fused_ordering(215) 00:17:34.002 fused_ordering(216) 00:17:34.002 fused_ordering(217) 00:17:34.002 fused_ordering(218) 00:17:34.002 fused_ordering(219) 00:17:34.002 fused_ordering(220) 00:17:34.002 fused_ordering(221) 00:17:34.002 fused_ordering(222) 00:17:34.002 fused_ordering(223) 00:17:34.002 fused_ordering(224) 00:17:34.002 fused_ordering(225) 00:17:34.002 fused_ordering(226) 00:17:34.002 fused_ordering(227) 00:17:34.002 fused_ordering(228) 00:17:34.002 fused_ordering(229) 00:17:34.002 fused_ordering(230) 00:17:34.002 fused_ordering(231) 00:17:34.002 fused_ordering(232) 00:17:34.002 fused_ordering(233) 00:17:34.002 fused_ordering(234) 00:17:34.002 fused_ordering(235) 00:17:34.002 fused_ordering(236) 00:17:34.002 fused_ordering(237) 00:17:34.002 fused_ordering(238) 00:17:34.002 fused_ordering(239) 00:17:34.002 fused_ordering(240) 00:17:34.002 fused_ordering(241) 00:17:34.002 fused_ordering(242) 00:17:34.002 fused_ordering(243) 00:17:34.002 fused_ordering(244) 00:17:34.002 fused_ordering(245) 00:17:34.002 fused_ordering(246) 00:17:34.002 fused_ordering(247) 00:17:34.002 fused_ordering(248) 00:17:34.002 fused_ordering(249) 00:17:34.002 fused_ordering(250) 00:17:34.002 fused_ordering(251) 00:17:34.002 fused_ordering(252) 00:17:34.002 fused_ordering(253) 00:17:34.002 fused_ordering(254) 00:17:34.002 fused_ordering(255) 00:17:34.002 fused_ordering(256) 00:17:34.002 fused_ordering(257) 00:17:34.002 fused_ordering(258) 00:17:34.002 fused_ordering(259) 00:17:34.002 fused_ordering(260) 00:17:34.003 fused_ordering(261) 00:17:34.003 fused_ordering(262) 00:17:34.003 fused_ordering(263) 00:17:34.003 fused_ordering(264) 00:17:34.003 fused_ordering(265) 00:17:34.003 fused_ordering(266) 00:17:34.003 fused_ordering(267) 00:17:34.003 fused_ordering(268) 00:17:34.003 fused_ordering(269) 00:17:34.003 fused_ordering(270) 00:17:34.003 fused_ordering(271) 
00:17:34.003 fused_ordering(272) 00:17:34.003 fused_ordering(273) 00:17:34.003 fused_ordering(274) 00:17:34.003 fused_ordering(275) 00:17:34.003 fused_ordering(276) 00:17:34.003 fused_ordering(277) 00:17:34.003 fused_ordering(278) 00:17:34.003 fused_ordering(279) 00:17:34.003 fused_ordering(280) 00:17:34.003 fused_ordering(281) 00:17:34.003 fused_ordering(282) 00:17:34.003 fused_ordering(283) 00:17:34.003 fused_ordering(284) 00:17:34.003 fused_ordering(285) 00:17:34.003 fused_ordering(286) 00:17:34.003 fused_ordering(287) 00:17:34.003 fused_ordering(288) 00:17:34.003 fused_ordering(289) 00:17:34.003 fused_ordering(290) 00:17:34.003 fused_ordering(291) 00:17:34.003 fused_ordering(292) 00:17:34.003 fused_ordering(293) 00:17:34.003 fused_ordering(294) 00:17:34.003 fused_ordering(295) 00:17:34.003 fused_ordering(296) 00:17:34.003 fused_ordering(297) 00:17:34.003 fused_ordering(298) 00:17:34.003 fused_ordering(299) 00:17:34.003 fused_ordering(300) 00:17:34.003 fused_ordering(301) 00:17:34.003 fused_ordering(302) 00:17:34.003 fused_ordering(303) 00:17:34.003 fused_ordering(304) 00:17:34.003 fused_ordering(305) 00:17:34.003 fused_ordering(306) 00:17:34.003 fused_ordering(307) 00:17:34.003 fused_ordering(308) 00:17:34.003 fused_ordering(309) 00:17:34.003 fused_ordering(310) 00:17:34.003 fused_ordering(311) 00:17:34.003 fused_ordering(312) 00:17:34.003 fused_ordering(313) 00:17:34.003 fused_ordering(314) 00:17:34.003 fused_ordering(315) 00:17:34.003 fused_ordering(316) 00:17:34.003 fused_ordering(317) 00:17:34.003 fused_ordering(318) 00:17:34.003 fused_ordering(319) 00:17:34.003 fused_ordering(320) 00:17:34.003 fused_ordering(321) 00:17:34.003 fused_ordering(322) 00:17:34.003 fused_ordering(323) 00:17:34.003 fused_ordering(324) 00:17:34.003 fused_ordering(325) 00:17:34.003 fused_ordering(326) 00:17:34.003 fused_ordering(327) 00:17:34.003 fused_ordering(328) 00:17:34.003 fused_ordering(329) 00:17:34.003 fused_ordering(330) 00:17:34.003 fused_ordering(331) 00:17:34.003 
fused_ordering(332) 00:17:34.003 fused_ordering(333) 00:17:34.003 fused_ordering(334) 00:17:34.003 fused_ordering(335) 00:17:34.003 fused_ordering(336) 00:17:34.003 fused_ordering(337) 00:17:34.003 fused_ordering(338) 00:17:34.003 fused_ordering(339) 00:17:34.003 fused_ordering(340) 00:17:34.003 fused_ordering(341) 00:17:34.003 fused_ordering(342) 00:17:34.003 fused_ordering(343) 00:17:34.003 fused_ordering(344) 00:17:34.003 fused_ordering(345) 00:17:34.003 fused_ordering(346) 00:17:34.003 fused_ordering(347) 00:17:34.003 fused_ordering(348) 00:17:34.003 fused_ordering(349) 00:17:34.003 fused_ordering(350) 00:17:34.003 fused_ordering(351) 00:17:34.003 fused_ordering(352) 00:17:34.003 fused_ordering(353) 00:17:34.003 fused_ordering(354) 00:17:34.003 fused_ordering(355) 00:17:34.003 fused_ordering(356) 00:17:34.003 fused_ordering(357) 00:17:34.003 fused_ordering(358) 00:17:34.003 fused_ordering(359) 00:17:34.003 fused_ordering(360) 00:17:34.003 fused_ordering(361) 00:17:34.003 fused_ordering(362) 00:17:34.003 fused_ordering(363) 00:17:34.003 fused_ordering(364) 00:17:34.003 fused_ordering(365) 00:17:34.003 fused_ordering(366) 00:17:34.003 fused_ordering(367) 00:17:34.003 fused_ordering(368) 00:17:34.003 fused_ordering(369) 00:17:34.003 fused_ordering(370) 00:17:34.003 fused_ordering(371) 00:17:34.003 fused_ordering(372) 00:17:34.003 fused_ordering(373) 00:17:34.003 fused_ordering(374) 00:17:34.003 fused_ordering(375) 00:17:34.003 fused_ordering(376) 00:17:34.003 fused_ordering(377) 00:17:34.003 fused_ordering(378) 00:17:34.003 fused_ordering(379) 00:17:34.003 fused_ordering(380) 00:17:34.003 fused_ordering(381) 00:17:34.003 fused_ordering(382) 00:17:34.003 fused_ordering(383) 00:17:34.003 fused_ordering(384) 00:17:34.003 fused_ordering(385) 00:17:34.003 fused_ordering(386) 00:17:34.003 fused_ordering(387) 00:17:34.003 fused_ordering(388) 00:17:34.003 fused_ordering(389) 00:17:34.003 fused_ordering(390) 00:17:34.003 fused_ordering(391) 00:17:34.003 fused_ordering(392) 
00:17:34.003 fused_ordering(393) 00:17:34.003 fused_ordering(394) 00:17:34.003 fused_ordering(395) 00:17:34.003 fused_ordering(396) 00:17:34.003 fused_ordering(397) 00:17:34.003 fused_ordering(398) 00:17:34.003 fused_ordering(399) 00:17:34.003 fused_ordering(400) 00:17:34.003 fused_ordering(401) 00:17:34.003 fused_ordering(402) 00:17:34.003 fused_ordering(403) 00:17:34.003 fused_ordering(404) 00:17:34.003 fused_ordering(405) 00:17:34.003 fused_ordering(406) 00:17:34.003 fused_ordering(407) 00:17:34.003 fused_ordering(408) 00:17:34.003 fused_ordering(409) 00:17:34.003 fused_ordering(410) 00:17:34.262 fused_ordering(411) 00:17:34.262 fused_ordering(412) 00:17:34.262 fused_ordering(413) 00:17:34.262 fused_ordering(414) 00:17:34.262 fused_ordering(415) 00:17:34.262 fused_ordering(416) 00:17:34.262 fused_ordering(417) 00:17:34.262 fused_ordering(418) 00:17:34.262 fused_ordering(419) 00:17:34.262 fused_ordering(420) 00:17:34.262 fused_ordering(421) 00:17:34.262 fused_ordering(422) 00:17:34.262 fused_ordering(423) 00:17:34.262 fused_ordering(424) 00:17:34.262 fused_ordering(425) 00:17:34.262 fused_ordering(426) 00:17:34.262 fused_ordering(427) 00:17:34.262 fused_ordering(428) 00:17:34.262 fused_ordering(429) 00:17:34.262 fused_ordering(430) 00:17:34.262 fused_ordering(431) 00:17:34.262 fused_ordering(432) 00:17:34.262 fused_ordering(433) 00:17:34.262 fused_ordering(434) 00:17:34.262 fused_ordering(435) 00:17:34.262 fused_ordering(436) 00:17:34.262 fused_ordering(437) 00:17:34.262 fused_ordering(438) 00:17:34.262 fused_ordering(439) 00:17:34.262 fused_ordering(440) 00:17:34.262 fused_ordering(441) 00:17:34.262 fused_ordering(442) 00:17:34.262 fused_ordering(443) 00:17:34.262 fused_ordering(444) 00:17:34.262 fused_ordering(445) 00:17:34.262 fused_ordering(446) 00:17:34.262 fused_ordering(447) 00:17:34.262 fused_ordering(448) 00:17:34.262 fused_ordering(449) 00:17:34.262 fused_ordering(450) 00:17:34.262 fused_ordering(451) 00:17:34.262 fused_ordering(452) 00:17:34.262 
fused_ordering(453) 00:17:34.262 fused_ordering(454) 00:17:34.262 fused_ordering(455) 00:17:34.262 fused_ordering(456) 00:17:34.262 fused_ordering(457) 00:17:34.262 fused_ordering(458) 00:17:34.262 fused_ordering(459) 00:17:34.262 fused_ordering(460) 00:17:34.262 fused_ordering(461) 00:17:34.262 fused_ordering(462) 00:17:34.262 fused_ordering(463) 00:17:34.262 fused_ordering(464) 00:17:34.262 fused_ordering(465) 00:17:34.262 fused_ordering(466) 00:17:34.262 fused_ordering(467) 00:17:34.262 fused_ordering(468) 00:17:34.262 fused_ordering(469) 00:17:34.262 fused_ordering(470) 00:17:34.262 fused_ordering(471) 00:17:34.262 fused_ordering(472) 00:17:34.262 fused_ordering(473) 00:17:34.262 fused_ordering(474) 00:17:34.262 fused_ordering(475) 00:17:34.262 fused_ordering(476) 00:17:34.262 fused_ordering(477) 00:17:34.262 fused_ordering(478) 00:17:34.262 fused_ordering(479) 00:17:34.262 fused_ordering(480) 00:17:34.262 fused_ordering(481) 00:17:34.262 fused_ordering(482) 00:17:34.262 fused_ordering(483) 00:17:34.262 fused_ordering(484) 00:17:34.262 fused_ordering(485) 00:17:34.262 fused_ordering(486) 00:17:34.262 fused_ordering(487) 00:17:34.262 fused_ordering(488) 00:17:34.262 fused_ordering(489) 00:17:34.262 fused_ordering(490) 00:17:34.262 fused_ordering(491) 00:17:34.262 fused_ordering(492) 00:17:34.262 fused_ordering(493) 00:17:34.262 fused_ordering(494) 00:17:34.262 fused_ordering(495) 00:17:34.262 fused_ordering(496) 00:17:34.262 fused_ordering(497) 00:17:34.262 fused_ordering(498) 00:17:34.262 fused_ordering(499) 00:17:34.262 fused_ordering(500) 00:17:34.262 fused_ordering(501) 00:17:34.262 fused_ordering(502) 00:17:34.262 fused_ordering(503) 00:17:34.262 fused_ordering(504) 00:17:34.262 fused_ordering(505) 00:17:34.262 fused_ordering(506) 00:17:34.262 fused_ordering(507) 00:17:34.262 fused_ordering(508) 00:17:34.262 fused_ordering(509) 00:17:34.262 fused_ordering(510) 00:17:34.262 fused_ordering(511) 00:17:34.262 fused_ordering(512) 00:17:34.262 fused_ordering(513) 
00:17:34.262 fused_ordering(514) 00:17:34.262 fused_ordering(515) 00:17:34.262 fused_ordering(516) 00:17:34.262 fused_ordering(517) 00:17:34.262 fused_ordering(518) 00:17:34.262 fused_ordering(519) 00:17:34.262 fused_ordering(520) 00:17:34.262 fused_ordering(521) 00:17:34.262 fused_ordering(522) 00:17:34.262 fused_ordering(523) 00:17:34.262 fused_ordering(524) 00:17:34.262 fused_ordering(525) 00:17:34.262 fused_ordering(526) 00:17:34.262 fused_ordering(527) 00:17:34.262 fused_ordering(528) 00:17:34.262 fused_ordering(529) 00:17:34.262 fused_ordering(530) 00:17:34.262 fused_ordering(531) 00:17:34.262 fused_ordering(532) 00:17:34.262 fused_ordering(533) 00:17:34.262 fused_ordering(534) 00:17:34.262 fused_ordering(535) 00:17:34.262 fused_ordering(536) 00:17:34.262 fused_ordering(537) 00:17:34.262 fused_ordering(538) 00:17:34.262 fused_ordering(539) 00:17:34.262 fused_ordering(540) 00:17:34.262 fused_ordering(541) 00:17:34.262 fused_ordering(542) 00:17:34.262 fused_ordering(543) 00:17:34.262 fused_ordering(544) 00:17:34.262 fused_ordering(545) 00:17:34.262 fused_ordering(546) 00:17:34.262 fused_ordering(547) 00:17:34.262 fused_ordering(548) 00:17:34.262 fused_ordering(549) 00:17:34.262 fused_ordering(550) 00:17:34.262 fused_ordering(551) 00:17:34.262 fused_ordering(552) 00:17:34.262 fused_ordering(553) 00:17:34.262 fused_ordering(554) 00:17:34.262 fused_ordering(555) 00:17:34.262 fused_ordering(556) 00:17:34.262 fused_ordering(557) 00:17:34.262 fused_ordering(558) 00:17:34.262 fused_ordering(559) 00:17:34.262 fused_ordering(560) 00:17:34.262 fused_ordering(561) 00:17:34.262 fused_ordering(562) 00:17:34.262 fused_ordering(563) 00:17:34.262 fused_ordering(564) 00:17:34.262 fused_ordering(565) 00:17:34.262 fused_ordering(566) 00:17:34.262 fused_ordering(567) 00:17:34.262 fused_ordering(568) 00:17:34.262 fused_ordering(569) 00:17:34.262 fused_ordering(570) 00:17:34.262 fused_ordering(571) 00:17:34.262 fused_ordering(572) 00:17:34.262 fused_ordering(573) 00:17:34.262 
fused_ordering(574) 00:17:34.262 fused_ordering(575) 00:17:34.262 fused_ordering(576) 00:17:34.262 fused_ordering(577) 00:17:34.262 fused_ordering(578) 00:17:34.262 fused_ordering(579) 00:17:34.262 fused_ordering(580) 00:17:34.262 fused_ordering(581) 00:17:34.262 fused_ordering(582) 00:17:34.262 fused_ordering(583) 00:17:34.262 fused_ordering(584) 00:17:34.262 fused_ordering(585) 00:17:34.262 fused_ordering(586) 00:17:34.262 fused_ordering(587) 00:17:34.262 fused_ordering(588) 00:17:34.262 fused_ordering(589) 00:17:34.262 fused_ordering(590) 00:17:34.262 fused_ordering(591) 00:17:34.262 fused_ordering(592) 00:17:34.262 fused_ordering(593) 00:17:34.262 fused_ordering(594) 00:17:34.262 fused_ordering(595) 00:17:34.262 fused_ordering(596) 00:17:34.262 fused_ordering(597) 00:17:34.262 fused_ordering(598) 00:17:34.262 fused_ordering(599) 00:17:34.262 fused_ordering(600) 00:17:34.262 fused_ordering(601) 00:17:34.262 fused_ordering(602) 00:17:34.262 fused_ordering(603) 00:17:34.262 fused_ordering(604) 00:17:34.262 fused_ordering(605) 00:17:34.262 fused_ordering(606) 00:17:34.262 fused_ordering(607) 00:17:34.262 fused_ordering(608) 00:17:34.262 fused_ordering(609) 00:17:34.262 fused_ordering(610) 00:17:34.262 fused_ordering(611) 00:17:34.262 fused_ordering(612) 00:17:34.262 fused_ordering(613) 00:17:34.262 fused_ordering(614) 00:17:34.262 fused_ordering(615) 00:17:34.830 fused_ordering(616) 00:17:34.830 fused_ordering(617) 00:17:34.830 fused_ordering(618) 00:17:34.830 fused_ordering(619) 00:17:34.830 fused_ordering(620) 00:17:34.830 fused_ordering(621) 00:17:34.830 fused_ordering(622) 00:17:34.830 fused_ordering(623) 00:17:34.830 fused_ordering(624) 00:17:34.830 fused_ordering(625) 00:17:34.830 fused_ordering(626) 00:17:34.830 fused_ordering(627) 00:17:34.830 fused_ordering(628) 00:17:34.830 fused_ordering(629) 00:17:34.830 fused_ordering(630) 00:17:34.830 fused_ordering(631) 00:17:34.830 fused_ordering(632) 00:17:34.830 fused_ordering(633) 00:17:34.830 fused_ordering(634) 
00:17:34.830 fused_ordering(635) 00:17:34.830 fused_ordering(636) 00:17:34.830 fused_ordering(637) 00:17:34.830 fused_ordering(638) 00:17:34.830 fused_ordering(639) 00:17:34.830 fused_ordering(640) 00:17:34.830 fused_ordering(641) 00:17:34.830 fused_ordering(642) 00:17:34.830 fused_ordering(643) 00:17:34.830 fused_ordering(644) 00:17:34.830 fused_ordering(645) 00:17:34.830 fused_ordering(646) 00:17:34.830 fused_ordering(647) 00:17:34.830 fused_ordering(648) 00:17:34.830 fused_ordering(649) 00:17:34.830 fused_ordering(650) 00:17:34.830 fused_ordering(651) 00:17:34.830 fused_ordering(652) 00:17:34.830 fused_ordering(653) 00:17:34.830 fused_ordering(654) 00:17:34.830 fused_ordering(655) 00:17:34.830 fused_ordering(656) 00:17:34.830 fused_ordering(657) 00:17:34.830 fused_ordering(658) 00:17:34.830 fused_ordering(659) 00:17:34.830 fused_ordering(660) 00:17:34.830 fused_ordering(661) 00:17:34.830 fused_ordering(662) 00:17:34.830 fused_ordering(663) 00:17:34.830 fused_ordering(664) 00:17:34.830 fused_ordering(665) 00:17:34.830 fused_ordering(666) 00:17:34.830 fused_ordering(667) 00:17:34.830 fused_ordering(668) 00:17:34.830 fused_ordering(669) 00:17:34.830 fused_ordering(670) 00:17:34.830 fused_ordering(671) 00:17:34.830 fused_ordering(672) 00:17:34.830 fused_ordering(673) 00:17:34.830 fused_ordering(674) 00:17:34.830 fused_ordering(675) 00:17:34.830 fused_ordering(676) 00:17:34.830 fused_ordering(677) 00:17:34.830 fused_ordering(678) 00:17:34.830 fused_ordering(679) 00:17:34.830 fused_ordering(680) 00:17:34.830 fused_ordering(681) 00:17:34.830 fused_ordering(682) 00:17:34.830 fused_ordering(683) 00:17:34.830 fused_ordering(684) 00:17:34.830 fused_ordering(685) 00:17:34.830 fused_ordering(686) 00:17:34.830 fused_ordering(687) 00:17:34.830 fused_ordering(688) 00:17:34.830 fused_ordering(689) 00:17:34.830 fused_ordering(690) 00:17:34.830 fused_ordering(691) 00:17:34.830 fused_ordering(692) 00:17:34.830 fused_ordering(693) 00:17:34.830 fused_ordering(694) 00:17:34.830 
fused_ordering(695) 00:17:34.830 fused_ordering(696) 00:17:34.830 fused_ordering(697) 00:17:34.830 fused_ordering(698) 00:17:34.830 fused_ordering(699) 00:17:34.830 fused_ordering(700) 00:17:34.830 fused_ordering(701) 00:17:34.830 fused_ordering(702) 00:17:34.830 fused_ordering(703) 00:17:34.830 fused_ordering(704) 00:17:34.830 fused_ordering(705) 00:17:34.830 fused_ordering(706) 00:17:34.830 fused_ordering(707) 00:17:34.830 fused_ordering(708) 00:17:34.831 fused_ordering(709) 00:17:34.831 fused_ordering(710) 00:17:34.831 fused_ordering(711) 00:17:34.831 fused_ordering(712) 00:17:34.831 fused_ordering(713) 00:17:34.831 fused_ordering(714) 00:17:34.831 fused_ordering(715) 00:17:34.831 fused_ordering(716) 00:17:34.831 fused_ordering(717) 00:17:34.831 fused_ordering(718) 00:17:34.831 fused_ordering(719) 00:17:34.831 fused_ordering(720) 00:17:34.831 fused_ordering(721) 00:17:34.831 fused_ordering(722) 00:17:34.831 fused_ordering(723) 00:17:34.831 fused_ordering(724) 00:17:34.831 fused_ordering(725) 00:17:34.831 fused_ordering(726) 00:17:34.831 fused_ordering(727) 00:17:34.831 fused_ordering(728) 00:17:34.831 fused_ordering(729) 00:17:34.831 fused_ordering(730) 00:17:34.831 fused_ordering(731) 00:17:34.831 fused_ordering(732) 00:17:34.831 fused_ordering(733) 00:17:34.831 fused_ordering(734) 00:17:34.831 fused_ordering(735) 00:17:34.831 fused_ordering(736) 00:17:34.831 fused_ordering(737) 00:17:34.831 fused_ordering(738) 00:17:34.831 fused_ordering(739) 00:17:34.831 fused_ordering(740) 00:17:34.831 fused_ordering(741) 00:17:34.831 fused_ordering(742) 00:17:34.831 fused_ordering(743) 00:17:34.831 fused_ordering(744) 00:17:34.831 fused_ordering(745) 00:17:34.831 fused_ordering(746) 00:17:34.831 fused_ordering(747) 00:17:34.831 fused_ordering(748) 00:17:34.831 fused_ordering(749) 00:17:34.831 fused_ordering(750) 00:17:34.831 fused_ordering(751) 00:17:34.831 fused_ordering(752) 00:17:34.831 fused_ordering(753) 00:17:34.831 fused_ordering(754) 00:17:34.831 fused_ordering(755) 
00:17:34.831 fused_ordering(756) 00:17:34.831 fused_ordering(757) 00:17:34.831 fused_ordering(758) 00:17:34.831 fused_ordering(759) 00:17:34.831 fused_ordering(760) 00:17:34.831 fused_ordering(761) 00:17:34.831 fused_ordering(762) 00:17:34.831 fused_ordering(763) 00:17:34.831 fused_ordering(764) 00:17:34.831 fused_ordering(765) 00:17:34.831 fused_ordering(766) 00:17:34.831 fused_ordering(767) 00:17:34.831 fused_ordering(768) 00:17:34.831 fused_ordering(769) 00:17:34.831 fused_ordering(770) 00:17:34.831 fused_ordering(771) 00:17:34.831 fused_ordering(772) 00:17:34.831 fused_ordering(773) 00:17:34.831 fused_ordering(774) 00:17:34.831 fused_ordering(775) 00:17:34.831 fused_ordering(776) 00:17:34.831 fused_ordering(777) 00:17:34.831 fused_ordering(778) 00:17:34.831 fused_ordering(779) 00:17:34.831 fused_ordering(780) 00:17:34.831 fused_ordering(781) 00:17:34.831 fused_ordering(782) 00:17:34.831 fused_ordering(783) 00:17:34.831 fused_ordering(784) 00:17:34.831 fused_ordering(785) 00:17:34.831 fused_ordering(786) 00:17:34.831 fused_ordering(787) 00:17:34.831 fused_ordering(788) 00:17:34.831 fused_ordering(789) 00:17:34.831 fused_ordering(790) 00:17:34.831 fused_ordering(791) 00:17:34.831 fused_ordering(792) 00:17:34.831 fused_ordering(793) 00:17:34.831 fused_ordering(794) 00:17:34.831 fused_ordering(795) 00:17:34.831 fused_ordering(796) 00:17:34.831 fused_ordering(797) 00:17:34.831 fused_ordering(798) 00:17:34.831 fused_ordering(799) 00:17:34.831 fused_ordering(800) 00:17:34.831 fused_ordering(801) 00:17:34.831 fused_ordering(802) 00:17:34.831 fused_ordering(803) 00:17:34.831 fused_ordering(804) 00:17:34.831 fused_ordering(805) 00:17:34.831 fused_ordering(806) 00:17:34.831 fused_ordering(807) 00:17:34.831 fused_ordering(808) 00:17:34.831 fused_ordering(809) 00:17:34.831 fused_ordering(810) 00:17:34.831 fused_ordering(811) 00:17:34.831 fused_ordering(812) 00:17:34.831 fused_ordering(813) 00:17:34.831 fused_ordering(814) 00:17:34.831 fused_ordering(815) 00:17:34.831 
fused_ordering(816) 00:17:34.831 fused_ordering(817) 00:17:34.831 fused_ordering(818) 00:17:34.831 fused_ordering(819) 00:17:34.831 fused_ordering(820) 00:17:35.399 fused_o[2024-12-13 06:23:26.766871] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x237da10 is same with the state(6) to be set 00:17:35.399 rdering(821) 00:17:35.399 fused_ordering(822) 00:17:35.399 fused_ordering(823) 00:17:35.399 fused_ordering(824) 00:17:35.399 fused_ordering(825) 00:17:35.399 fused_ordering(826) 00:17:35.399 fused_ordering(827) 00:17:35.399 fused_ordering(828) 00:17:35.399 fused_ordering(829) 00:17:35.399 fused_ordering(830) 00:17:35.399 fused_ordering(831) 00:17:35.399 fused_ordering(832) 00:17:35.399 fused_ordering(833) 00:17:35.399 fused_ordering(834) 00:17:35.399 fused_ordering(835) 00:17:35.399 fused_ordering(836) 00:17:35.399 fused_ordering(837) 00:17:35.399 fused_ordering(838) 00:17:35.399 fused_ordering(839) 00:17:35.399 fused_ordering(840) 00:17:35.399 fused_ordering(841) 00:17:35.399 fused_ordering(842) 00:17:35.399 fused_ordering(843) 00:17:35.399 fused_ordering(844) 00:17:35.399 fused_ordering(845) 00:17:35.399 fused_ordering(846) 00:17:35.399 fused_ordering(847) 00:17:35.399 fused_ordering(848) 00:17:35.399 fused_ordering(849) 00:17:35.399 fused_ordering(850) 00:17:35.399 fused_ordering(851) 00:17:35.399 fused_ordering(852) 00:17:35.399 fused_ordering(853) 00:17:35.399 fused_ordering(854) 00:17:35.399 fused_ordering(855) 00:17:35.399 fused_ordering(856) 00:17:35.399 fused_ordering(857) 00:17:35.399 fused_ordering(858) 00:17:35.399 fused_ordering(859) 00:17:35.399 fused_ordering(860) 00:17:35.399 fused_ordering(861) 00:17:35.399 fused_ordering(862) 00:17:35.399 fused_ordering(863) 00:17:35.399 fused_ordering(864) 00:17:35.399 fused_ordering(865) 00:17:35.399 fused_ordering(866) 00:17:35.399 fused_ordering(867) 00:17:35.399 fused_ordering(868) 00:17:35.399 fused_ordering(869) 00:17:35.399 fused_ordering(870) 00:17:35.399 fused_ordering(871) 
00:17:35.399 fused_ordering(872) 00:17:35.399 fused_ordering(873) 00:17:35.399 fused_ordering(874) 00:17:35.399 fused_ordering(875) 00:17:35.399 fused_ordering(876) 00:17:35.399 fused_ordering(877) 00:17:35.399 fused_ordering(878) 00:17:35.399 fused_ordering(879) 00:17:35.399 fused_ordering(880) 00:17:35.399 fused_ordering(881) 00:17:35.399 fused_ordering(882) 00:17:35.399 fused_ordering(883) 00:17:35.399 fused_ordering(884) 00:17:35.399 fused_ordering(885) 00:17:35.399 fused_ordering(886) 00:17:35.399 fused_ordering(887) 00:17:35.399 fused_ordering(888) 00:17:35.399 fused_ordering(889) 00:17:35.399 fused_ordering(890) 00:17:35.399 fused_ordering(891) 00:17:35.399 fused_ordering(892) 00:17:35.399 fused_ordering(893) 00:17:35.399 fused_ordering(894) 00:17:35.399 fused_ordering(895) 00:17:35.399 fused_ordering(896) 00:17:35.399 fused_ordering(897) 00:17:35.399 fused_ordering(898) 00:17:35.399 fused_ordering(899) 00:17:35.399 fused_ordering(900) 00:17:35.399 fused_ordering(901) 00:17:35.399 fused_ordering(902) 00:17:35.399 fused_ordering(903) 00:17:35.399 fused_ordering(904) 00:17:35.399 fused_ordering(905) 00:17:35.399 fused_ordering(906) 00:17:35.399 fused_ordering(907) 00:17:35.399 fused_ordering(908) 00:17:35.399 fused_ordering(909) 00:17:35.399 fused_ordering(910) 00:17:35.399 fused_ordering(911) 00:17:35.399 fused_ordering(912) 00:17:35.399 fused_ordering(913) 00:17:35.399 fused_ordering(914) 00:17:35.399 fused_ordering(915) 00:17:35.399 fused_ordering(916) 00:17:35.399 fused_ordering(917) 00:17:35.399 fused_ordering(918) 00:17:35.399 fused_ordering(919) 00:17:35.399 fused_ordering(920) 00:17:35.399 fused_ordering(921) 00:17:35.399 fused_ordering(922) 00:17:35.399 fused_ordering(923) 00:17:35.399 fused_ordering(924) 00:17:35.399 fused_ordering(925) 00:17:35.399 fused_ordering(926) 00:17:35.399 fused_ordering(927) 00:17:35.399 fused_ordering(928) 00:17:35.399 fused_ordering(929) 00:17:35.399 fused_ordering(930) 00:17:35.399 fused_ordering(931) 00:17:35.399 
fused_ordering(932) 00:17:35.399 fused_ordering(933) 00:17:35.399 fused_ordering(934) 00:17:35.399 fused_ordering(935) 00:17:35.399 fused_ordering(936) 00:17:35.399 fused_ordering(937) 00:17:35.399 fused_ordering(938) 00:17:35.399 fused_ordering(939) 00:17:35.399 fused_ordering(940) 00:17:35.399 fused_ordering(941) 00:17:35.399 fused_ordering(942) 00:17:35.399 fused_ordering(943) 00:17:35.399 fused_ordering(944) 00:17:35.399 fused_ordering(945) 00:17:35.400 fused_ordering(946) 00:17:35.400 fused_ordering(947) 00:17:35.400 fused_ordering(948) 00:17:35.400 fused_ordering(949) 00:17:35.400 fused_ordering(950) 00:17:35.400 fused_ordering(951) 00:17:35.400 fused_ordering(952) 00:17:35.400 fused_ordering(953) 00:17:35.400 fused_ordering(954) 00:17:35.400 fused_ordering(955) 00:17:35.400 fused_ordering(956) 00:17:35.400 fused_ordering(957) 00:17:35.400 fused_ordering(958) 00:17:35.400 fused_ordering(959) 00:17:35.400 fused_ordering(960) 00:17:35.400 fused_ordering(961) 00:17:35.400 fused_ordering(962) 00:17:35.400 fused_ordering(963) 00:17:35.400 fused_ordering(964) 00:17:35.400 fused_ordering(965) 00:17:35.400 fused_ordering(966) 00:17:35.400 fused_ordering(967) 00:17:35.400 fused_ordering(968) 00:17:35.400 fused_ordering(969) 00:17:35.400 fused_ordering(970) 00:17:35.400 fused_ordering(971) 00:17:35.400 fused_ordering(972) 00:17:35.400 fused_ordering(973) 00:17:35.400 fused_ordering(974) 00:17:35.400 fused_ordering(975) 00:17:35.400 fused_ordering(976) 00:17:35.400 fused_ordering(977) 00:17:35.400 fused_ordering(978) 00:17:35.400 fused_ordering(979) 00:17:35.400 fused_ordering(980) 00:17:35.400 fused_ordering(981) 00:17:35.400 fused_ordering(982) 00:17:35.400 fused_ordering(983) 00:17:35.400 fused_ordering(984) 00:17:35.400 fused_ordering(985) 00:17:35.400 fused_ordering(986) 00:17:35.400 fused_ordering(987) 00:17:35.400 fused_ordering(988) 00:17:35.400 fused_ordering(989) 00:17:35.400 fused_ordering(990) 00:17:35.400 fused_ordering(991) 00:17:35.400 fused_ordering(992) 
00:17:35.400 fused_ordering(993) 00:17:35.400 fused_ordering(994) 00:17:35.400 fused_ordering(995) 00:17:35.400 fused_ordering(996) 00:17:35.400 fused_ordering(997) 00:17:35.400 fused_ordering(998) 00:17:35.400 fused_ordering(999) 00:17:35.400 fused_ordering(1000) 00:17:35.400 fused_ordering(1001) 00:17:35.400 fused_ordering(1002) 00:17:35.400 fused_ordering(1003) 00:17:35.400 fused_ordering(1004) 00:17:35.400 fused_ordering(1005) 00:17:35.400 fused_ordering(1006) 00:17:35.400 fused_ordering(1007) 00:17:35.400 fused_ordering(1008) 00:17:35.400 fused_ordering(1009) 00:17:35.400 fused_ordering(1010) 00:17:35.400 fused_ordering(1011) 00:17:35.400 fused_ordering(1012) 00:17:35.400 fused_ordering(1013) 00:17:35.400 fused_ordering(1014) 00:17:35.400 fused_ordering(1015) 00:17:35.400 fused_ordering(1016) 00:17:35.400 fused_ordering(1017) 00:17:35.400 fused_ordering(1018) 00:17:35.400 fused_ordering(1019) 00:17:35.400 fused_ordering(1020) 00:17:35.400 fused_ordering(1021) 00:17:35.400 fused_ordering(1022) 00:17:35.400 fused_ordering(1023) 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:35.400 rmmod nvme_tcp 00:17:35.400 
rmmod nvme_fabrics 00:17:35.400 rmmod nvme_keyring 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 952674 ']' 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 952674 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 952674 ']' 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 952674 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 952674 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 952674' 00:17:35.400 killing process with pid 952674 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 952674 00:17:35.400 06:23:26 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 952674 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:35.659 06:23:27 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:37.656 00:17:37.656 real 0m10.587s 00:17:37.656 user 0m4.964s 00:17:37.656 sys 0m5.724s 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:37.656 ************************************ 00:17:37.656 END TEST nvmf_fused_ordering 00:17:37.656 ************************************ 00:17:37.656 06:23:29 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:37.656 ************************************ 00:17:37.656 START TEST nvmf_ns_masking 00:17:37.656 ************************************ 00:17:37.656 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:37.656 * Looking for test storage... 00:17:37.916 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # 
IFS=.-: 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:37.916 06:23:29 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:37.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.916 --rc genhtml_branch_coverage=1 00:17:37.916 --rc genhtml_function_coverage=1 00:17:37.916 --rc genhtml_legend=1 00:17:37.916 --rc geninfo_all_blocks=1 00:17:37.916 --rc geninfo_unexecuted_blocks=1 00:17:37.916 00:17:37.916 ' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:37.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.916 --rc genhtml_branch_coverage=1 00:17:37.916 --rc genhtml_function_coverage=1 00:17:37.916 --rc genhtml_legend=1 00:17:37.916 --rc geninfo_all_blocks=1 00:17:37.916 --rc geninfo_unexecuted_blocks=1 00:17:37.916 00:17:37.916 ' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:37.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.916 --rc genhtml_branch_coverage=1 00:17:37.916 --rc genhtml_function_coverage=1 00:17:37.916 --rc genhtml_legend=1 00:17:37.916 --rc geninfo_all_blocks=1 00:17:37.916 --rc geninfo_unexecuted_blocks=1 00:17:37.916 00:17:37.916 ' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:17:37.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:37.916 --rc genhtml_branch_coverage=1 00:17:37.916 --rc genhtml_function_coverage=1 00:17:37.916 --rc genhtml_legend=1 00:17:37.916 --rc geninfo_all_blocks=1 00:17:37.916 --rc geninfo_unexecuted_blocks=1 00:17:37.916 00:17:37.916 ' 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:37.916 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:37.917 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=84079eb4-c3cd-4ae6-a10c-3687a0bfab65 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=23110212-c7e6-45d2-a009-a6bff1deb75d 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=6634fa1e-b17d-4ad4-8ee9-39552f08ba91 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:37.917 06:23:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:44.488 06:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.488 06:23:35 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:44.488 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:44.488 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:44.488 Found net devices under 0000:af:00.0: cvl_0_0 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:44.488 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:44.489 Found net devices under 0000:af:00.1: cvl_0_1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:44.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:44.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:17:44.489 00:17:44.489 --- 10.0.0.2 ping statistics --- 00:17:44.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.489 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:44.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:17:44.489 00:17:44.489 --- 10.0.0.1 ping statistics --- 00:17:44.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.489 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=956529 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 956529 
00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 956529 ']' 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.489 [2024-12-13 06:23:35.399154] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:17:44.489 [2024-12-13 06:23:35.399198] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.489 [2024-12-13 06:23:35.479855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.489 [2024-12-13 06:23:35.501567] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.489 [2024-12-13 06:23:35.501601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:44.489 [2024-12-13 06:23:35.501608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.489 [2024-12-13 06:23:35.501614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.489 [2024-12-13 06:23:35.501619] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.489 [2024-12-13 06:23:35.502082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:44.489 [2024-12-13 06:23:35.797516] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:44.489 06:23:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:44.489 Malloc1 00:17:44.489 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:44.748 Malloc2 00:17:44.748 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:45.007 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:45.007 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:45.264 [2024-12-13 06:23:36.764528] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:45.264 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:45.264 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6634fa1e-b17d-4ad4-8ee9-39552f08ba91 -a 10.0.0.2 -s 4420 -i 4 00:17:45.523 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:45.523 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:45.523 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:45.523 06:23:36 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:45.523 06:23:36 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:47.425 06:23:38 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.425 [ 0]:0x1 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.425 
06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0a71a63addc410084a8e0b23aa2adca 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0a71a63addc410084a8e0b23aa2adca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.425 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:47.684 [ 0]:0x1 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0a71a63addc410084a8e0b23aa2adca 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0a71a63addc410084a8e0b23aa2adca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:47.684 [ 1]:0x2 00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:17:47.684 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:47.943 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:47.943 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:47.943 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:17:47.943 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:47.943 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:47.943 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:48.201 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:17:48.459 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:17:48.459 06:23:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6634fa1e-b17d-4ad4-8ee9-39552f08ba91 -a 10.0.0.2 -s 4420 -i 4 00:17:48.717 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:17:48.717 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:48.717 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:48.717 06:23:40 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:17:48.717 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:17:48.717 06:23:40 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:50.621 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:50.880 [ 0]:0x2 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:50.880 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.139 [ 0]:0x1 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0a71a63addc410084a8e0b23aa2adca 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0a71a63addc410084a8e0b23aa2adca != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.139 [ 1]:0x2 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.139 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:51.399 [ 0]:0x2 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:17:51.399 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:51.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.399 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:51.658 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:17:51.658 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 6634fa1e-b17d-4ad4-8ee9-39552f08ba91 -a 10.0.0.2 -s 4420 -i 4 00:17:51.916 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:51.917 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:51.917 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:51.917 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:17:51.917 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:17:51.917 06:23:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:17:53.820 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.078 [ 0]:0x1 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.078 06:23:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d0a71a63addc410084a8e0b23aa2adca 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d0a71a63addc410084a8e0b23aa2adca != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.078 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.337 [ 1]:0x2 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.337 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.337 
06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.596 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.596 [ 0]:0x2 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.596 06:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:54.596 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:17:54.855 [2024-12-13 06:23:46.259815] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:17:54.855 request: 00:17:54.855 { 00:17:54.855 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:54.855 "nsid": 2, 00:17:54.855 "host": "nqn.2016-06.io.spdk:host1", 00:17:54.855 "method": "nvmf_ns_remove_host", 00:17:54.855 "req_id": 1 00:17:54.855 } 00:17:54.855 Got JSON-RPC error response 00:17:54.855 response: 00:17:54.855 { 00:17:54.855 "code": -32602, 00:17:54.855 "message": "Invalid parameters" 00:17:54.855 } 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:17:54.855 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:54.856 06:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:17:54.856 [ 0]:0x2 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=e5f3e29318df4689b3df4ca2965bf741 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ e5f3e29318df4689b3df4ca2965bf741 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:54.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=958478 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 958478 /var/tmp/host.sock 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 958478 ']' 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:54.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:54.856 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:54.856 [2024-12-13 06:23:46.479065] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:17:54.856 [2024-12-13 06:23:46.479114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid958478 ] 00:17:55.114 [2024-12-13 06:23:46.554363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.115 [2024-12-13 06:23:46.576613] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:55.373 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.373 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:55.373 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:55.373 06:23:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:55.631 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 84079eb4-c3cd-4ae6-a10c-3687a0bfab65 00:17:55.631 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:55.631 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84079EB4C3CD4AE6A10C3687A0BFAB65 -i 00:17:55.890 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 23110212-c7e6-45d2-a009-a6bff1deb75d 00:17:55.890 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:55.890 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 23110212C7E645D2A009A6BFF1DEB75D -i 00:17:56.149 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:17:56.149 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:17:56.408 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.408 06:23:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:17:56.666 nvme0n1 00:17:56.666 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:56.666 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:17:57.234 nvme1n2 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:17:57.234 06:23:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:17:57.492 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 84079eb4-c3cd-4ae6-a10c-3687a0bfab65 == \8\4\0\7\9\e\b\4\-\c\3\c\d\-\4\a\e\6\-\a\1\0\c\-\3\6\8\7\a\0\b\f\a\b\6\5 ]] 00:17:57.492 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:17:57.492 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:17:57.492 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:17:57.751 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 23110212-c7e6-45d2-a009-a6bff1deb75d == \2\3\1\1\0\2\1\2\-\c\7\e\6\-\4\5\d\2\-\a\0\0\9\-\a\6\b\f\f\1\d\e\b\7\5\d ]] 00:17:57.751 06:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 84079eb4-c3cd-4ae6-a10c-3687a0bfab65 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84079EB4C3CD4AE6A10C3687A0BFAB65 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84079EB4C3CD4AE6A10C3687A0BFAB65 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:58.010 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 84079EB4C3CD4AE6A10C3687A0BFAB65 00:17:58.269 [2024-12-13 06:23:49.801733] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:17:58.269 [2024-12-13 06:23:49.801765] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:17:58.269 [2024-12-13 06:23:49.801774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:58.269 request: 00:17:58.269 { 00:17:58.269 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.269 "namespace": { 00:17:58.269 "bdev_name": "invalid", 00:17:58.269 "nsid": 1, 00:17:58.269 "nguid": "84079EB4C3CD4AE6A10C3687A0BFAB65", 00:17:58.269 "no_auto_visible": false, 00:17:58.269 "hide_metadata": false 00:17:58.269 }, 00:17:58.269 "method": "nvmf_subsystem_add_ns", 00:17:58.269 "req_id": 1 00:17:58.269 } 00:17:58.269 Got JSON-RPC error response 00:17:58.269 response: 00:17:58.269 { 00:17:58.269 "code": -32602, 00:17:58.269 "message": "Invalid parameters" 00:17:58.269 } 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:17:58.269 06:23:49 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 84079eb4-c3cd-4ae6-a10c-3687a0bfab65 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:17:58.269 06:23:49 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 84079EB4C3CD4AE6A10C3687A0BFAB65 -i 00:17:58.527 06:23:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:00.430 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:00.430 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:00.430 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 958478 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 958478 ']' 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 958478 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:00.689 06:23:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 958478 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 958478' 00:18:00.689 killing process with pid 958478 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 958478 00:18:00.689 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 958478 00:18:00.948 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:18:01.206 rmmod nvme_tcp 00:18:01.206 rmmod nvme_fabrics 00:18:01.206 rmmod nvme_keyring 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 956529 ']' 00:18:01.206 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 956529 00:18:01.207 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 956529 ']' 00:18:01.207 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 956529 00:18:01.207 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:01.207 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.207 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 956529 00:18:01.466 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.466 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.466 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 956529' 00:18:01.466 killing process with pid 956529 00:18:01.466 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 956529 00:18:01.466 06:23:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 956529 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
-- # '[' '' == iso ']' 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:01.466 06:23:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:04.002 00:18:04.002 real 0m25.943s 00:18:04.002 user 0m30.899s 00:18:04.002 sys 0m6.998s 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:04.002 ************************************ 00:18:04.002 END TEST nvmf_ns_masking 00:18:04.002 ************************************ 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:04.002 
06:23:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:04.002 ************************************ 00:18:04.002 START TEST nvmf_nvme_cli 00:18:04.002 ************************************ 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:04.002 * Looking for test storage... 00:18:04.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:04.002 
06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:04.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.002 --rc genhtml_branch_coverage=1 00:18:04.002 --rc genhtml_function_coverage=1 00:18:04.002 --rc genhtml_legend=1 00:18:04.002 --rc geninfo_all_blocks=1 00:18:04.002 --rc geninfo_unexecuted_blocks=1 00:18:04.002 
00:18:04.002 ' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:04.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.002 --rc genhtml_branch_coverage=1 00:18:04.002 --rc genhtml_function_coverage=1 00:18:04.002 --rc genhtml_legend=1 00:18:04.002 --rc geninfo_all_blocks=1 00:18:04.002 --rc geninfo_unexecuted_blocks=1 00:18:04.002 00:18:04.002 ' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:04.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.002 --rc genhtml_branch_coverage=1 00:18:04.002 --rc genhtml_function_coverage=1 00:18:04.002 --rc genhtml_legend=1 00:18:04.002 --rc geninfo_all_blocks=1 00:18:04.002 --rc geninfo_unexecuted_blocks=1 00:18:04.002 00:18:04.002 ' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:04.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:04.002 --rc genhtml_branch_coverage=1 00:18:04.002 --rc genhtml_function_coverage=1 00:18:04.002 --rc genhtml_legend=1 00:18:04.002 --rc geninfo_all_blocks=1 00:18:04.002 --rc geninfo_unexecuted_blocks=1 00:18:04.002 00:18:04.002 ' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.002 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:04.003 06:23:55 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:04.003 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:04.003 06:23:55 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:10.567 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:10.567 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:10.567 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:10.567 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.567 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:10.568 Found net devices under 0000:af:00.0: cvl_0_0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:10.568 Found net devices under 0000:af:00.1: cvl_0_1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:10.568 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:10.568 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:10.568 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.273 ms 00:18:10.568 00:18:10.568 --- 10.0.0.2 ping statistics --- 00:18:10.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.568 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:10.568 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:10.568 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:18:10.568 00:18:10.568 --- 10.0.0.1 ping statistics --- 00:18:10.568 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:10.568 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:10.568 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=963171 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 963171 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 963171 ']' 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:10.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 [2024-12-13 06:24:01.396717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:10.568 [2024-12-13 06:24:01.396761] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:10.568 [2024-12-13 06:24:01.477465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:10.568 [2024-12-13 06:24:01.501168] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:10.568 [2024-12-13 06:24:01.501206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:10.568 [2024-12-13 06:24:01.501213] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:10.568 [2024-12-13 06:24:01.501219] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:10.568 [2024-12-13 06:24:01.501224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:10.568 [2024-12-13 06:24:01.502720] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.568 [2024-12-13 06:24:01.502830] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:10.568 [2024-12-13 06:24:01.502920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:10.568 [2024-12-13 06:24:01.502921] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 [2024-12-13 06:24:01.634532] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 Malloc0 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.568 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.568 Malloc1 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.569 [2024-12-13 06:24:01.729680] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:10.569 00:18:10.569 Discovery Log Number of Records 2, Generation counter 2 00:18:10.569 =====Discovery Log Entry 0====== 00:18:10.569 trtype: tcp 00:18:10.569 adrfam: ipv4 00:18:10.569 subtype: current discovery subsystem 00:18:10.569 treq: not required 00:18:10.569 portid: 0 00:18:10.569 trsvcid: 4420 
00:18:10.569 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:10.569 traddr: 10.0.0.2 00:18:10.569 eflags: explicit discovery connections, duplicate discovery information 00:18:10.569 sectype: none 00:18:10.569 =====Discovery Log Entry 1====== 00:18:10.569 trtype: tcp 00:18:10.569 adrfam: ipv4 00:18:10.569 subtype: nvme subsystem 00:18:10.569 treq: not required 00:18:10.569 portid: 0 00:18:10.569 trsvcid: 4420 00:18:10.569 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:10.569 traddr: 10.0.0.2 00:18:10.569 eflags: none 00:18:10.569 sectype: none 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:10.569 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:11.501 06:24:03 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:11.501 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:11.501 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:11.501 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:11.501 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:11.501 06:24:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.397 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:13.661 
06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:13.661 /dev/nvme0n2 ]] 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.661 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:13.920 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:14.178 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:14.178 rmmod nvme_tcp 00:18:14.178 rmmod nvme_fabrics 00:18:14.178 rmmod nvme_keyring 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 963171 ']' 
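The repeated `read -r dev _` / `[[ $dev == /dev/nvme* ]]` loops traced above implement `get_nvme_devs` from nvmf/common.sh: scan `nvme list` output and keep only `/dev/nvme*` device paths. A self-contained re-creation; the sample listing fed to it is illustrative, not from a real system:

```shell
# Re-creation of the get_nvme_devs helper seen in the trace: filter
# `nvme list`-style output down to /dev/nvme* device paths.
get_nvme_devs() {
    local dev _
    while read -r dev _; do
        [[ $dev == /dev/nvme* ]] && echo "$dev"
    done
}

# Illustrative input standing in for `nvme list` output.
get_nvme_devs <<'EOF'
Node                  SN                   Model
--------------------- -------------------- --------------------
/dev/nvme0n1          SPDKISFASTANDAWESOME SPDK bdev Controller
/dev/nvme0n2          SPDKISFASTANDAWESOME SPDK bdev Controller
EOF
```

Run against the sample, it prints `/dev/nvme0n1` and `/dev/nvme0n2`, matching the two namespaces the trace discovers after `nvme connect`.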
00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 963171 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 963171 ']' 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 963171 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 963171 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 963171' 00:18:14.178 killing process with pid 963171 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 963171 00:18:14.178 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 963171 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.437 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:16.974 00:18:16.974 real 0m12.799s 00:18:16.974 user 0m19.433s 00:18:16.974 sys 0m5.038s 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:16.974 ************************************ 00:18:16.974 END TEST nvmf_nvme_cli 00:18:16.974 ************************************ 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:16.974 ************************************ 00:18:16.974 START TEST 
nvmf_vfio_user 00:18:16.974 ************************************ 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:16.974 * Looking for test storage... 00:18:16.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:16.974 06:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:16.974 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:16.974 06:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:16.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.975 --rc genhtml_branch_coverage=1 00:18:16.975 --rc genhtml_function_coverage=1 00:18:16.975 --rc genhtml_legend=1 00:18:16.975 --rc geninfo_all_blocks=1 00:18:16.975 --rc geninfo_unexecuted_blocks=1 00:18:16.975 00:18:16.975 ' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:16.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.975 --rc genhtml_branch_coverage=1 00:18:16.975 --rc genhtml_function_coverage=1 00:18:16.975 --rc genhtml_legend=1 00:18:16.975 --rc geninfo_all_blocks=1 00:18:16.975 --rc geninfo_unexecuted_blocks=1 00:18:16.975 00:18:16.975 ' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:16.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.975 --rc genhtml_branch_coverage=1 00:18:16.975 --rc genhtml_function_coverage=1 00:18:16.975 --rc genhtml_legend=1 00:18:16.975 --rc geninfo_all_blocks=1 00:18:16.975 --rc geninfo_unexecuted_blocks=1 00:18:16.975 00:18:16.975 ' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:16.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:16.975 --rc genhtml_branch_coverage=1 00:18:16.975 --rc genhtml_function_coverage=1 00:18:16.975 --rc genhtml_legend=1 00:18:16.975 --rc geninfo_all_blocks=1 00:18:16.975 --rc geninfo_unexecuted_blocks=1 00:18:16.975 00:18:16.975 ' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:16.975 
06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:16.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:16.975 06:24:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=964473 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 964473' 00:18:16.975 Process pid: 964473 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 964473 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
964473 ']' 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:16.975 [2024-12-13 06:24:08.371177] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:16.975 [2024-12-13 06:24:08.371228] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:16.975 [2024-12-13 06:24:08.446422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:16.975 [2024-12-13 06:24:08.469709] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:16.975 [2024-12-13 06:24:08.469743] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:16.975 [2024-12-13 06:24:08.469751] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:16.975 [2024-12-13 06:24:08.469757] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:16.975 [2024-12-13 06:24:08.469762] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:16.975 [2024-12-13 06:24:08.471221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.975 [2024-12-13 06:24:08.471338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:16.975 [2024-12-13 06:24:08.471444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:16.975 [2024-12-13 06:24:08.471445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:16.975 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.976 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:16.976 06:24:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:18.348 06:24:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:18.606 Malloc1 00:18:18.606 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:18.606 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:18.863 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:19.120 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:19.120 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:19.120 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:19.378 Malloc2 00:18:19.378 06:24:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:19.636 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:19.636 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:19.893 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:19.893 [2024-12-13 06:24:11.486744] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:18:19.893 [2024-12-13 06:24:11.486787] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid965342 ] 00:18:19.893 [2024-12-13 06:24:11.529893] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:19.894 [2024-12-13 06:24:11.537737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.894 [2024-12-13 06:24:11.537756] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f54f3fea000 00:18:19.894 [2024-12-13 06:24:11.538736] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.539734] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.540737] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.541743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.542743] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.543752] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.544757] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.545762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:19.894 [2024-12-13 06:24:11.546775] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:19.894 [2024-12-13 06:24:11.546784] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f54f2cf4000 00:18:19.894 [2024-12-13 06:24:11.547700] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:20.153 [2024-12-13 06:24:11.561138] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:20.153 [2024-12-13 06:24:11.561163] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:20.153 [2024-12-13 06:24:11.563888] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:20.153 [2024-12-13 06:24:11.563922] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:20.153 [2024-12-13 06:24:11.563990] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:20.153 [2024-12-13 06:24:11.564005] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:20.153 [2024-12-13 06:24:11.564010] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:20.153 [2024-12-13 06:24:11.564880] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:20.153 [2024-12-13 06:24:11.564888] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:20.153 [2024-12-13 06:24:11.564895] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:20.153 [2024-12-13 06:24:11.565887] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:20.153 [2024-12-13 06:24:11.565897] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:20.153 [2024-12-13 06:24:11.565904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:20.153 [2024-12-13 06:24:11.566894] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:20.153 [2024-12-13 06:24:11.566901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:20.154 [2024-12-13 06:24:11.567898] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:20.154 [2024-12-13 06:24:11.567904] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:20.154 [2024-12-13 06:24:11.567909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:20.154 [2024-12-13 06:24:11.567914] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:20.154 [2024-12-13 06:24:11.568021] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:20.154 [2024-12-13 06:24:11.568025] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:20.154 [2024-12-13 06:24:11.568030] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:20.154 [2024-12-13 06:24:11.568901] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:20.154 [2024-12-13 06:24:11.569910] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:20.154 [2024-12-13 06:24:11.570915] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:20.154 [2024-12-13 06:24:11.571912] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:20.154 [2024-12-13 06:24:11.571999] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:20.154 [2024-12-13 06:24:11.572923] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:20.154 [2024-12-13 06:24:11.572930] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:20.154 [2024-12-13 06:24:11.572934] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.572950] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:20.154 [2024-12-13 06:24:11.572960] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.572970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.154 [2024-12-13 06:24:11.572975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.154 [2024-12-13 06:24:11.572978] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.154 [2024-12-13 06:24:11.572994] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573056] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:20.154 [2024-12-13 06:24:11.573060] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:20.154 [2024-12-13 06:24:11.573064] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:20.154 [2024-12-13 06:24:11.573068] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:20.154 [2024-12-13 06:24:11.573072] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:20.154 [2024-12-13 06:24:11.573076] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:20.154 [2024-12-13 06:24:11.573080] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573089] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573098] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573121] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.154 [2024-12-13 06:24:11.573129] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.154 [2024-12-13 06:24:11.573136] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.154 [2024-12-13 06:24:11.573143] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:20.154 [2024-12-13 06:24:11.573147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573154] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573179] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:20.154 [2024-12-13 06:24:11.573183] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573189] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573194] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573276] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:20.154 [2024-12-13 06:24:11.573280] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:20.154 [2024-12-13 06:24:11.573283] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.154 [2024-12-13 06:24:11.573289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573309] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:20.154 [2024-12-13 06:24:11.573316] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573329] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.154 [2024-12-13 06:24:11.573332] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.154 [2024-12-13 06:24:11.573335] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.154 [2024-12-13 06:24:11.573341] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573383] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:20.154 [2024-12-13 06:24:11.573387] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.154 [2024-12-13 06:24:11.573390] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.154 [2024-12-13 06:24:11.573395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:20.154 [2024-12-13 06:24:11.573413] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573418] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573437] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573441] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573445] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:20.154 [2024-12-13 06:24:11.573454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:20.154 [2024-12-13 06:24:11.573459] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:20.154 [2024-12-13 06:24:11.573475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:20.154 [2024-12-13 06:24:11.573494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:20.154 [2024-12-13 06:24:11.573502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 06:24:11.573511] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:20.155 [2024-12-13 06:24:11.573519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 06:24:11.573529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:20.155 [2024-12-13 06:24:11.573541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 06:24:11.573552] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:20.155 [2024-12-13 06:24:11.573556] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:20.155 [2024-12-13 06:24:11.573559] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:20.155 [2024-12-13 06:24:11.573562] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:20.155 [2024-12-13 06:24:11.573565] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:20.155 [2024-12-13 06:24:11.573570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:20.155 [2024-12-13 06:24:11.573576] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:20.155 [2024-12-13 06:24:11.573580] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:20.155 [2024-12-13 06:24:11.573583] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.155 [2024-12-13 06:24:11.573588] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:20.155 [2024-12-13 06:24:11.573594] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:20.155 [2024-12-13 06:24:11.573598] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:20.155 [2024-12-13 06:24:11.573601] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.155 [2024-12-13 06:24:11.573607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:20.155 [2024-12-13 06:24:11.573614] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:20.155 [2024-12-13 06:24:11.573617] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:20.155 [2024-12-13 06:24:11.573620] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:20.155 [2024-12-13 06:24:11.573626] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:20.155 [2024-12-13 06:24:11.573632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 
06:24:11.573641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 06:24:11.573652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:20.155 [2024-12-13 06:24:11.573658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:20.155 ===================================================== 00:18:20.155 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:20.155 ===================================================== 00:18:20.155 Controller Capabilities/Features 00:18:20.155 ================================ 00:18:20.155 Vendor ID: 4e58 00:18:20.155 Subsystem Vendor ID: 4e58 00:18:20.155 Serial Number: SPDK1 00:18:20.155 Model Number: SPDK bdev Controller 00:18:20.155 Firmware Version: 25.01 00:18:20.155 Recommended Arb Burst: 6 00:18:20.155 IEEE OUI Identifier: 8d 6b 50 00:18:20.155 Multi-path I/O 00:18:20.155 May have multiple subsystem ports: Yes 00:18:20.155 May have multiple controllers: Yes 00:18:20.155 Associated with SR-IOV VF: No 00:18:20.155 Max Data Transfer Size: 131072 00:18:20.155 Max Number of Namespaces: 32 00:18:20.155 Max Number of I/O Queues: 127 00:18:20.155 NVMe Specification Version (VS): 1.3 00:18:20.155 NVMe Specification Version (Identify): 1.3 00:18:20.155 Maximum Queue Entries: 256 00:18:20.155 Contiguous Queues Required: Yes 00:18:20.155 Arbitration Mechanisms Supported 00:18:20.155 Weighted Round Robin: Not Supported 00:18:20.155 Vendor Specific: Not Supported 00:18:20.155 Reset Timeout: 15000 ms 00:18:20.155 Doorbell Stride: 4 bytes 00:18:20.155 NVM Subsystem Reset: Not Supported 00:18:20.155 Command Sets Supported 00:18:20.155 NVM Command Set: Supported 00:18:20.155 Boot Partition: Not Supported 00:18:20.155 Memory Page Size Minimum: 4096 bytes 00:18:20.155 
Memory Page Size Maximum: 4096 bytes 00:18:20.155 Persistent Memory Region: Not Supported 00:18:20.155 Optional Asynchronous Events Supported 00:18:20.155 Namespace Attribute Notices: Supported 00:18:20.155 Firmware Activation Notices: Not Supported 00:18:20.155 ANA Change Notices: Not Supported 00:18:20.155 PLE Aggregate Log Change Notices: Not Supported 00:18:20.155 LBA Status Info Alert Notices: Not Supported 00:18:20.155 EGE Aggregate Log Change Notices: Not Supported 00:18:20.155 Normal NVM Subsystem Shutdown event: Not Supported 00:18:20.155 Zone Descriptor Change Notices: Not Supported 00:18:20.155 Discovery Log Change Notices: Not Supported 00:18:20.155 Controller Attributes 00:18:20.155 128-bit Host Identifier: Supported 00:18:20.155 Non-Operational Permissive Mode: Not Supported 00:18:20.155 NVM Sets: Not Supported 00:18:20.155 Read Recovery Levels: Not Supported 00:18:20.155 Endurance Groups: Not Supported 00:18:20.155 Predictable Latency Mode: Not Supported 00:18:20.155 Traffic Based Keep ALive: Not Supported 00:18:20.155 Namespace Granularity: Not Supported 00:18:20.155 SQ Associations: Not Supported 00:18:20.155 UUID List: Not Supported 00:18:20.155 Multi-Domain Subsystem: Not Supported 00:18:20.155 Fixed Capacity Management: Not Supported 00:18:20.155 Variable Capacity Management: Not Supported 00:18:20.155 Delete Endurance Group: Not Supported 00:18:20.155 Delete NVM Set: Not Supported 00:18:20.155 Extended LBA Formats Supported: Not Supported 00:18:20.155 Flexible Data Placement Supported: Not Supported 00:18:20.155 00:18:20.155 Controller Memory Buffer Support 00:18:20.155 ================================ 00:18:20.155 Supported: No 00:18:20.155 00:18:20.155 Persistent Memory Region Support 00:18:20.155 ================================ 00:18:20.155 Supported: No 00:18:20.155 00:18:20.155 Admin Command Set Attributes 00:18:20.155 ============================ 00:18:20.155 Security Send/Receive: Not Supported 00:18:20.155 Format NVM: Not Supported 
00:18:20.155 Firmware Activate/Download: Not Supported 00:18:20.155 Namespace Management: Not Supported 00:18:20.155 Device Self-Test: Not Supported 00:18:20.155 Directives: Not Supported 00:18:20.155 NVMe-MI: Not Supported 00:18:20.155 Virtualization Management: Not Supported 00:18:20.155 Doorbell Buffer Config: Not Supported 00:18:20.155 Get LBA Status Capability: Not Supported 00:18:20.155 Command & Feature Lockdown Capability: Not Supported 00:18:20.155 Abort Command Limit: 4 00:18:20.155 Async Event Request Limit: 4 00:18:20.155 Number of Firmware Slots: N/A 00:18:20.155 Firmware Slot 1 Read-Only: N/A 00:18:20.155 Firmware Activation Without Reset: N/A 00:18:20.155 Multiple Update Detection Support: N/A 00:18:20.155 Firmware Update Granularity: No Information Provided 00:18:20.155 Per-Namespace SMART Log: No 00:18:20.155 Asymmetric Namespace Access Log Page: Not Supported 00:18:20.155 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:20.155 Command Effects Log Page: Supported 00:18:20.155 Get Log Page Extended Data: Supported 00:18:20.155 Telemetry Log Pages: Not Supported 00:18:20.155 Persistent Event Log Pages: Not Supported 00:18:20.155 Supported Log Pages Log Page: May Support 00:18:20.155 Commands Supported & Effects Log Page: Not Supported 00:18:20.155 Feature Identifiers & Effects Log Page:May Support 00:18:20.155 NVMe-MI Commands & Effects Log Page: May Support 00:18:20.155 Data Area 4 for Telemetry Log: Not Supported 00:18:20.155 Error Log Page Entries Supported: 128 00:18:20.155 Keep Alive: Supported 00:18:20.155 Keep Alive Granularity: 10000 ms 00:18:20.155 00:18:20.155 NVM Command Set Attributes 00:18:20.155 ========================== 00:18:20.155 Submission Queue Entry Size 00:18:20.155 Max: 64 00:18:20.155 Min: 64 00:18:20.155 Completion Queue Entry Size 00:18:20.155 Max: 16 00:18:20.155 Min: 16 00:18:20.155 Number of Namespaces: 32 00:18:20.155 Compare Command: Supported 00:18:20.155 Write Uncorrectable Command: Not Supported 00:18:20.155 Dataset 
Management Command: Supported 00:18:20.155 Write Zeroes Command: Supported 00:18:20.155 Set Features Save Field: Not Supported 00:18:20.155 Reservations: Not Supported 00:18:20.155 Timestamp: Not Supported 00:18:20.155 Copy: Supported 00:18:20.155 Volatile Write Cache: Present 00:18:20.155 Atomic Write Unit (Normal): 1 00:18:20.155 Atomic Write Unit (PFail): 1 00:18:20.155 Atomic Compare & Write Unit: 1 00:18:20.155 Fused Compare & Write: Supported 00:18:20.155 Scatter-Gather List 00:18:20.155 SGL Command Set: Supported (Dword aligned) 00:18:20.155 SGL Keyed: Not Supported 00:18:20.155 SGL Bit Bucket Descriptor: Not Supported 00:18:20.155 SGL Metadata Pointer: Not Supported 00:18:20.155 Oversized SGL: Not Supported 00:18:20.155 SGL Metadata Address: Not Supported 00:18:20.155 SGL Offset: Not Supported 00:18:20.155 Transport SGL Data Block: Not Supported 00:18:20.155 Replay Protected Memory Block: Not Supported 00:18:20.155 00:18:20.155 Firmware Slot Information 00:18:20.155 ========================= 00:18:20.155 Active slot: 1 00:18:20.155 Slot 1 Firmware Revision: 25.01 00:18:20.155 00:18:20.155 00:18:20.156 Commands Supported and Effects 00:18:20.156 ============================== 00:18:20.156 Admin Commands 00:18:20.156 -------------- 00:18:20.156 Get Log Page (02h): Supported 00:18:20.156 Identify (06h): Supported 00:18:20.156 Abort (08h): Supported 00:18:20.156 Set Features (09h): Supported 00:18:20.156 Get Features (0Ah): Supported 00:18:20.156 Asynchronous Event Request (0Ch): Supported 00:18:20.156 Keep Alive (18h): Supported 00:18:20.156 I/O Commands 00:18:20.156 ------------ 00:18:20.156 Flush (00h): Supported LBA-Change 00:18:20.156 Write (01h): Supported LBA-Change 00:18:20.156 Read (02h): Supported 00:18:20.156 Compare (05h): Supported 00:18:20.156 Write Zeroes (08h): Supported LBA-Change 00:18:20.156 Dataset Management (09h): Supported LBA-Change 00:18:20.156 Copy (19h): Supported LBA-Change 00:18:20.156 00:18:20.156 Error Log 00:18:20.156 ========= 
00:18:20.156 00:18:20.156 Arbitration 00:18:20.156 =========== 00:18:20.156 Arbitration Burst: 1 00:18:20.156 00:18:20.156 Power Management 00:18:20.156 ================ 00:18:20.156 Number of Power States: 1 00:18:20.156 Current Power State: Power State #0 00:18:20.156 Power State #0: 00:18:20.156 Max Power: 0.00 W 00:18:20.156 Non-Operational State: Operational 00:18:20.156 Entry Latency: Not Reported 00:18:20.156 Exit Latency: Not Reported 00:18:20.156 Relative Read Throughput: 0 00:18:20.156 Relative Read Latency: 0 00:18:20.156 Relative Write Throughput: 0 00:18:20.156 Relative Write Latency: 0 00:18:20.156 Idle Power: Not Reported 00:18:20.156 Active Power: Not Reported 00:18:20.156 Non-Operational Permissive Mode: Not Supported 00:18:20.156 00:18:20.156 Health Information 00:18:20.156 ================== 00:18:20.156 Critical Warnings: 00:18:20.156 Available Spare Space: OK 00:18:20.156 Temperature: OK 00:18:20.156 Device Reliability: OK 00:18:20.156 Read Only: No 00:18:20.156 Volatile Memory Backup: OK 00:18:20.156 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:20.156 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:20.156 Available Spare: 0% 00:18:20.156 Available Sp[2024-12-13 06:24:11.573739] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:20.156 [2024-12-13 06:24:11.573748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:20.156 [2024-12-13 06:24:11.573770] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:20.156 [2024-12-13 06:24:11.573778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.156 [2024-12-13 06:24:11.573784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.156 [2024-12-13 06:24:11.573790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.156 [2024-12-13 06:24:11.573795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:20.156 [2024-12-13 06:24:11.575455] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:20.156 [2024-12-13 06:24:11.575465] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:20.156 [2024-12-13 06:24:11.575936] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:20.156 [2024-12-13 06:24:11.575983] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:20.156 [2024-12-13 06:24:11.575989] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:20.156 [2024-12-13 06:24:11.576943] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:20.156 [2024-12-13 06:24:11.576952] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:20.156 [2024-12-13 06:24:11.577001] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:20.156 [2024-12-13 06:24:11.577975] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:20.156 are Threshold: 0% 00:18:20.156 Life Percentage Used: 0% 00:18:20.156 Data Units Read: 0 00:18:20.156 Data 
Units Written: 0 00:18:20.156 Host Read Commands: 0 00:18:20.156 Host Write Commands: 0 00:18:20.156 Controller Busy Time: 0 minutes 00:18:20.156 Power Cycles: 0 00:18:20.156 Power On Hours: 0 hours 00:18:20.156 Unsafe Shutdowns: 0 00:18:20.156 Unrecoverable Media Errors: 0 00:18:20.156 Lifetime Error Log Entries: 0 00:18:20.156 Warning Temperature Time: 0 minutes 00:18:20.156 Critical Temperature Time: 0 minutes 00:18:20.156 00:18:20.156 Number of Queues 00:18:20.156 ================ 00:18:20.156 Number of I/O Submission Queues: 127 00:18:20.156 Number of I/O Completion Queues: 127 00:18:20.156 00:18:20.156 Active Namespaces 00:18:20.156 ================= 00:18:20.156 Namespace ID:1 00:18:20.156 Error Recovery Timeout: Unlimited 00:18:20.156 Command Set Identifier: NVM (00h) 00:18:20.156 Deallocate: Supported 00:18:20.156 Deallocated/Unwritten Error: Not Supported 00:18:20.156 Deallocated Read Value: Unknown 00:18:20.156 Deallocate in Write Zeroes: Not Supported 00:18:20.156 Deallocated Guard Field: 0xFFFF 00:18:20.156 Flush: Supported 00:18:20.156 Reservation: Supported 00:18:20.156 Namespace Sharing Capabilities: Multiple Controllers 00:18:20.156 Size (in LBAs): 131072 (0GiB) 00:18:20.156 Capacity (in LBAs): 131072 (0GiB) 00:18:20.156 Utilization (in LBAs): 131072 (0GiB) 00:18:20.156 NGUID: A08547C4272242E5B5BA6AFBD34FDD7B 00:18:20.156 UUID: a08547c4-2722-42e5-b5ba-6afbd34fdd7b 00:18:20.156 Thin Provisioning: Not Supported 00:18:20.156 Per-NS Atomic Units: Yes 00:18:20.156 Atomic Boundary Size (Normal): 0 00:18:20.156 Atomic Boundary Size (PFail): 0 00:18:20.156 Atomic Boundary Offset: 0 00:18:20.156 Maximum Single Source Range Length: 65535 00:18:20.156 Maximum Copy Length: 65535 00:18:20.156 Maximum Source Range Count: 1 00:18:20.156 NGUID/EUI64 Never Reused: No 00:18:20.156 Namespace Write Protected: No 00:18:20.156 Number of LBA Formats: 1 00:18:20.156 Current LBA Format: LBA Format #00 00:18:20.156 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:20.156 00:18:20.156 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:20.156 [2024-12-13 06:24:11.806237] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:25.417 Initializing NVMe Controllers 00:18:25.417 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:25.417 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:25.417 Initialization complete. Launching workers. 00:18:25.417 ======================================================== 00:18:25.417 Latency(us) 00:18:25.417 Device Information : IOPS MiB/s Average min max 00:18:25.417 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39890.85 155.82 3208.37 961.11 10607.31 00:18:25.417 ======================================================== 00:18:25.417 Total : 39890.85 155.82 3208.37 961.11 10607.31 00:18:25.417 00:18:25.417 [2024-12-13 06:24:16.823476] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:25.417 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:25.417 [2024-12-13 06:24:17.058516] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:30.677 Initializing NVMe Controllers 00:18:30.677 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:30.677 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:30.677 Initialization complete. Launching workers. 00:18:30.677 ======================================================== 00:18:30.677 Latency(us) 00:18:30.677 Device Information : IOPS MiB/s Average min max 00:18:30.677 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16021.00 62.58 7998.10 4988.62 15474.84 00:18:30.677 ======================================================== 00:18:30.677 Total : 16021.00 62.58 7998.10 4988.62 15474.84 00:18:30.677 00:18:30.677 [2024-12-13 06:24:22.095628] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:30.677 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:30.677 [2024-12-13 06:24:22.305636] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:35.940 [2024-12-13 06:24:27.387769] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:35.940 Initializing NVMe Controllers 00:18:35.940 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:35.940 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:35.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:35.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:35.940 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:35.940 Initialization complete. Launching workers. 
00:18:35.940 Starting thread on core 2 00:18:35.940 Starting thread on core 3 00:18:35.940 Starting thread on core 1 00:18:35.940 06:24:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:36.252 [2024-12-13 06:24:27.678235] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.623 [2024-12-13 06:24:30.737675] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.623 Initializing NVMe Controllers 00:18:39.623 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.623 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.623 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:39.623 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:39.623 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:39.623 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:39.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:39.623 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:39.623 Initialization complete. Launching workers. 
00:18:39.623 Starting thread on core 1 with urgent priority queue 00:18:39.623 Starting thread on core 2 with urgent priority queue 00:18:39.623 Starting thread on core 3 with urgent priority queue 00:18:39.623 Starting thread on core 0 with urgent priority queue 00:18:39.623 SPDK bdev Controller (SPDK1 ) core 0: 9001.67 IO/s 11.11 secs/100000 ios 00:18:39.623 SPDK bdev Controller (SPDK1 ) core 1: 8487.33 IO/s 11.78 secs/100000 ios 00:18:39.623 SPDK bdev Controller (SPDK1 ) core 2: 8037.33 IO/s 12.44 secs/100000 ios 00:18:39.623 SPDK bdev Controller (SPDK1 ) core 3: 8606.67 IO/s 11.62 secs/100000 ios 00:18:39.623 ======================================================== 00:18:39.623 00:18:39.623 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:39.623 [2024-12-13 06:24:31.026914] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.623 Initializing NVMe Controllers 00:18:39.623 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.623 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:39.623 Namespace ID: 1 size: 0GB 00:18:39.623 Initialization complete. 00:18:39.623 INFO: using host memory buffer for IO 00:18:39.623 Hello world! 
00:18:39.623 [2024-12-13 06:24:31.061143] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.623 06:24:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:39.880 [2024-12-13 06:24:31.330071] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:40.811 Initializing NVMe Controllers 00:18:40.811 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.811 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:40.811 Initialization complete. Launching workers. 00:18:40.811 submit (in ns) avg, min, max = 7333.3, 3171.4, 3999299.0 00:18:40.811 complete (in ns) avg, min, max = 19176.6, 1717.1, 7986285.7 00:18:40.811 00:18:40.811 Submit histogram 00:18:40.811 ================ 00:18:40.811 Range in us Cumulative Count 00:18:40.811 3.170 - 3.185: 0.0123% ( 2) 00:18:40.811 3.185 - 3.200: 0.0430% ( 5) 00:18:40.811 3.200 - 3.215: 0.2947% ( 41) 00:18:40.811 3.215 - 3.230: 0.8042% ( 83) 00:18:40.811 3.230 - 3.246: 1.5040% ( 114) 00:18:40.811 3.246 - 3.261: 3.1430% ( 267) 00:18:40.811 3.261 - 3.276: 7.5015% ( 710) 00:18:40.811 3.276 - 3.291: 13.9042% ( 1043) 00:18:40.811 3.291 - 3.307: 20.1166% ( 1012) 00:18:40.811 3.307 - 3.322: 27.1025% ( 1138) 00:18:40.811 3.322 - 3.337: 34.0516% ( 1132) 00:18:40.811 3.337 - 3.352: 39.3063% ( 856) 00:18:40.811 3.352 - 3.368: 43.9779% ( 761) 00:18:40.811 3.368 - 3.383: 48.6495% ( 761) 00:18:40.811 3.383 - 3.398: 53.3824% ( 771) 00:18:40.811 3.398 - 3.413: 57.4770% ( 667) 00:18:40.811 3.413 - 3.429: 63.7569% ( 1023) 00:18:40.811 3.429 - 3.444: 70.3499% ( 1074) 00:18:40.811 3.444 - 3.459: 75.3407% ( 813) 00:18:40.811 3.459 - 3.474: 80.6446% ( 864) 00:18:40.811 3.474 - 3.490: 84.5795% ( 641) 
00:18:40.811 3.490 - 3.505: 86.8631% ( 372) 00:18:40.811 3.505 - 3.520: 87.8392% ( 159) 00:18:40.811 3.520 - 3.535: 88.3180% ( 78) 00:18:40.811 3.535 - 3.550: 88.7477% ( 70) 00:18:40.811 3.550 - 3.566: 89.3247% ( 94) 00:18:40.811 3.566 - 3.581: 90.0000% ( 110) 00:18:40.811 3.581 - 3.596: 90.9331% ( 152) 00:18:40.811 3.596 - 3.611: 91.9521% ( 166) 00:18:40.811 3.611 - 3.627: 92.7993% ( 138) 00:18:40.811 3.627 - 3.642: 93.7078% ( 148) 00:18:40.811 3.642 - 3.657: 94.4076% ( 114) 00:18:40.811 3.657 - 3.672: 95.0890% ( 111) 00:18:40.811 3.672 - 3.688: 95.9546% ( 141) 00:18:40.811 3.688 - 3.703: 96.6789% ( 118) 00:18:40.811 3.703 - 3.718: 97.4524% ( 126) 00:18:40.811 3.718 - 3.733: 98.0847% ( 103) 00:18:40.811 3.733 - 3.749: 98.4530% ( 60) 00:18:40.811 3.749 - 3.764: 98.7723% ( 52) 00:18:40.811 3.764 - 3.779: 99.0853% ( 51) 00:18:40.811 3.779 - 3.794: 99.3002% ( 35) 00:18:40.811 3.794 - 3.810: 99.4107% ( 18) 00:18:40.811 3.810 - 3.825: 99.5273% ( 19) 00:18:40.811 3.825 - 3.840: 99.5703% ( 7) 00:18:40.811 3.840 - 3.855: 99.6255% ( 9) 00:18:40.811 3.855 - 3.870: 99.6501% ( 4) 00:18:40.811 3.870 - 3.886: 99.6624% ( 2) 00:18:40.811 3.901 - 3.931: 99.6685% ( 1) 00:18:40.811 4.754 - 4.785: 99.6746% ( 1) 00:18:40.811 4.876 - 4.907: 99.6808% ( 1) 00:18:40.811 4.968 - 4.998: 99.6869% ( 1) 00:18:40.811 5.029 - 5.059: 99.6992% ( 2) 00:18:40.811 5.150 - 5.181: 99.7053% ( 1) 00:18:40.811 5.181 - 5.211: 99.7115% ( 1) 00:18:40.811 5.211 - 5.242: 99.7176% ( 1) 00:18:40.811 5.333 - 5.364: 99.7238% ( 1) 00:18:40.811 5.364 - 5.394: 99.7299% ( 1) 00:18:40.811 5.394 - 5.425: 99.7360% ( 1) 00:18:40.811 5.455 - 5.486: 99.7483% ( 2) 00:18:40.811 5.486 - 5.516: 99.7545% ( 1) 00:18:40.811 5.547 - 5.577: 99.7606% ( 1) 00:18:40.811 5.638 - 5.669: 99.7667% ( 1) 00:18:40.811 5.669 - 5.699: 99.7729% ( 1) 00:18:40.811 5.730 - 5.760: 99.7790% ( 1) 00:18:40.811 5.882 - 5.912: 99.7851% ( 1) 00:18:40.811 6.004 - 6.034: 99.7974% ( 2) 00:18:40.811 6.156 - 6.187: 99.8036% ( 1) 00:18:40.811 6.187 - 6.217: 
99.8097% ( 1) 00:18:40.811 6.217 - 6.248: 99.8158% ( 1) 00:18:40.811 6.248 - 6.278: 99.8220% ( 1) 00:18:40.811 6.400 - 6.430: 99.8281% ( 1) 00:18:40.811 6.430 - 6.461: 99.8343% ( 1) 00:18:40.811 6.491 - 6.522: 99.8404% ( 1) 00:18:40.811 6.552 - 6.583: 99.8465% ( 1) 00:18:40.811 6.583 - 6.613: 99.8527% ( 1) 00:18:40.811 6.796 - 6.827: 99.8588% ( 1) 00:18:40.811 6.857 - 6.888: 99.8649% ( 1) 00:18:40.811 7.101 - 7.131: 99.8711% ( 1) 00:18:40.811 7.223 - 7.253: 99.8772% ( 1) 00:18:40.811 [2024-12-13 06:24:32.349979] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:40.811 7.619 - 7.650: 99.8834% ( 1) 00:18:40.811 7.710 - 7.741: 99.8895% ( 1) 00:18:40.811 7.741 - 7.771: 99.8956% ( 1) 00:18:40.811 8.290 - 8.350: 99.9018% ( 1) 00:18:40.811 3994.575 - 4025.783: 100.0000% ( 16) 00:18:40.811 00:18:40.811 Complete histogram 00:18:40.811 ================== 00:18:40.811 Range in us Cumulative Count 00:18:40.811 1.714 - 1.722: 0.0123% ( 2) 00:18:40.811 1.722 - 1.730: 0.0368% ( 4) 00:18:40.812 1.730 - 1.737: 0.1719% ( 22) 00:18:40.812 1.737 - 1.745: 0.2824% ( 18) 00:18:40.812 1.745 - 1.752: 0.3376% ( 9) 00:18:40.812 1.752 - 1.760: 0.3806% ( 7) 00:18:40.812 1.760 - 1.768: 0.4236% ( 7) 00:18:40.812 1.768 - 1.775: 1.9767% ( 253) 00:18:40.812 1.775 - 1.783: 12.0074% ( 1634) 00:18:40.812 1.783 - 1.790: 34.0577% ( 3592) 00:18:40.812 1.790 - 1.798: 52.6151% ( 3023) 00:18:40.812 1.798 - 1.806: 61.1848% ( 1396) 00:18:40.812 1.806 - 1.813: 64.4690% ( 535) 00:18:40.812 1.813 - 1.821: 66.2615% ( 292) 00:18:40.812 1.821 - 1.829: 67.3358% ( 175) 00:18:40.812 1.829 - 1.836: 69.5089% ( 354) 00:18:40.812 1.836 - 1.844: 76.0651% ( 1068) 00:18:40.812 1.844 - 1.851: 84.4874% ( 1372) 00:18:40.812 1.851 - 1.859: 91.2216% ( 1097) 00:18:40.812 1.859 - 1.867: 94.6470% ( 558) 00:18:40.812 1.867 - 1.874: 96.5071% ( 303) 00:18:40.812 1.874 - 1.882: 97.4156% ( 148) 00:18:40.812 1.882 - 1.890: 97.9312% ( 84) 00:18:40.812 1.890 - 1.897: 98.1707% ( 39) 
00:18:40.812 1.897 - 1.905: 98.4408% ( 44) 00:18:40.812 1.905 - 1.912: 98.6372% ( 32) 00:18:40.812 1.912 - 1.920: 98.8336% ( 32) 00:18:40.812 1.920 - 1.928: 99.0178% ( 30) 00:18:40.812 1.928 - 1.935: 99.1590% ( 23) 00:18:40.812 1.935 - 1.943: 99.2572% ( 16) 00:18:40.812 1.943 - 1.950: 99.3309% ( 12) 00:18:40.812 1.950 - 1.966: 99.3923% ( 10) 00:18:40.812 1.966 - 1.981: 99.4045% ( 2) 00:18:40.812 1.981 - 1.996: 99.4168% ( 2) 00:18:40.812 1.996 - 2.011: 99.4230% ( 1) 00:18:40.812 2.057 - 2.072: 99.4291% ( 1) 00:18:40.812 3.505 - 3.520: 99.4352% ( 1) 00:18:40.812 3.749 - 3.764: 99.4414% ( 1) 00:18:40.812 3.764 - 3.779: 99.4475% ( 1) 00:18:40.812 3.886 - 3.901: 99.4537% ( 1) 00:18:40.812 4.206 - 4.236: 99.4598% ( 1) 00:18:40.812 4.236 - 4.267: 99.4659% ( 1) 00:18:40.812 4.419 - 4.450: 99.4721% ( 1) 00:18:40.812 4.632 - 4.663: 99.4782% ( 1) 00:18:40.812 4.968 - 4.998: 99.4843% ( 1) 00:18:40.812 4.998 - 5.029: 99.4966% ( 2) 00:18:40.812 5.272 - 5.303: 99.5028% ( 1) 00:18:40.812 5.303 - 5.333: 99.5089% ( 1) 00:18:40.812 5.425 - 5.455: 99.5150% ( 1) 00:18:40.812 5.516 - 5.547: 99.5212% ( 1) 00:18:40.812 5.730 - 5.760: 99.5273% ( 1) 00:18:40.812 5.760 - 5.790: 99.5335% ( 1) 00:18:40.812 5.943 - 5.973: 99.5396% ( 1) 00:18:40.812 5.973 - 6.004: 99.5457% ( 1) 00:18:40.812 6.217 - 6.248: 99.5519% ( 1) 00:18:40.812 7.131 - 7.162: 99.5580% ( 1) 00:18:40.812 7.680 - 7.710: 99.5641% ( 1) 00:18:40.812 26.210 - 26.331: 99.5703% ( 1) 00:18:40.812 3183.177 - 3198.781: 99.5764% ( 1) 00:18:40.812 3978.971 - 3994.575: 99.6071% ( 5) 00:18:40.812 3994.575 - 4025.783: 99.9939% ( 63) 00:18:40.812 7957.943 - 7989.150: 100.0000% ( 1) 00:18:40.812 00:18:40.812 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:40.812 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:40.812 
06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:40.812 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:40.812 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:41.070 [ 00:18:41.070 { 00:18:41.070 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.070 "subtype": "Discovery", 00:18:41.070 "listen_addresses": [], 00:18:41.070 "allow_any_host": true, 00:18:41.070 "hosts": [] 00:18:41.070 }, 00:18:41.070 { 00:18:41.070 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:41.070 "subtype": "NVMe", 00:18:41.070 "listen_addresses": [ 00:18:41.070 { 00:18:41.070 "trtype": "VFIOUSER", 00:18:41.070 "adrfam": "IPv4", 00:18:41.070 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:41.070 "trsvcid": "0" 00:18:41.070 } 00:18:41.070 ], 00:18:41.070 "allow_any_host": true, 00:18:41.070 "hosts": [], 00:18:41.070 "serial_number": "SPDK1", 00:18:41.070 "model_number": "SPDK bdev Controller", 00:18:41.070 "max_namespaces": 32, 00:18:41.070 "min_cntlid": 1, 00:18:41.070 "max_cntlid": 65519, 00:18:41.070 "namespaces": [ 00:18:41.070 { 00:18:41.070 "nsid": 1, 00:18:41.070 "bdev_name": "Malloc1", 00:18:41.070 "name": "Malloc1", 00:18:41.070 "nguid": "A08547C4272242E5B5BA6AFBD34FDD7B", 00:18:41.070 "uuid": "a08547c4-2722-42e5-b5ba-6afbd34fdd7b" 00:18:41.070 } 00:18:41.070 ] 00:18:41.070 }, 00:18:41.070 { 00:18:41.070 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:41.070 "subtype": "NVMe", 00:18:41.070 "listen_addresses": [ 00:18:41.070 { 00:18:41.070 "trtype": "VFIOUSER", 00:18:41.070 "adrfam": "IPv4", 00:18:41.070 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:41.070 "trsvcid": "0" 00:18:41.070 } 00:18:41.070 ], 00:18:41.070 "allow_any_host": true, 00:18:41.070 "hosts": [], 00:18:41.070 
"serial_number": "SPDK2", 00:18:41.070 "model_number": "SPDK bdev Controller", 00:18:41.070 "max_namespaces": 32, 00:18:41.070 "min_cntlid": 1, 00:18:41.070 "max_cntlid": 65519, 00:18:41.070 "namespaces": [ 00:18:41.070 { 00:18:41.070 "nsid": 1, 00:18:41.070 "bdev_name": "Malloc2", 00:18:41.070 "name": "Malloc2", 00:18:41.070 "nguid": "39A3AA4361DE407B88BDE34177D48F4F", 00:18:41.070 "uuid": "39a3aa43-61de-407b-88bd-e34177d48f4f" 00:18:41.070 } 00:18:41.070 ] 00:18:41.070 } 00:18:41.070 ] 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=968707 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:41.070 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:41.328 [2024-12-13 06:24:32.734836] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:41.328 Malloc3 00:18:41.328 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:41.328 [2024-12-13 06:24:32.961494] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:41.586 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:41.586 Asynchronous Event Request test 00:18:41.586 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:41.586 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:41.586 Registering asynchronous event callbacks... 00:18:41.586 Starting namespace attribute notice tests for all controllers... 00:18:41.586 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:41.586 aer_cb - Changed Namespace 00:18:41.586 Cleaning up... 
00:18:41.586 [ 00:18:41.586 { 00:18:41.586 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:41.586 "subtype": "Discovery", 00:18:41.586 "listen_addresses": [], 00:18:41.586 "allow_any_host": true, 00:18:41.586 "hosts": [] 00:18:41.586 }, 00:18:41.586 { 00:18:41.586 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:41.586 "subtype": "NVMe", 00:18:41.586 "listen_addresses": [ 00:18:41.586 { 00:18:41.586 "trtype": "VFIOUSER", 00:18:41.586 "adrfam": "IPv4", 00:18:41.586 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:41.586 "trsvcid": "0" 00:18:41.586 } 00:18:41.586 ], 00:18:41.586 "allow_any_host": true, 00:18:41.586 "hosts": [], 00:18:41.586 "serial_number": "SPDK1", 00:18:41.586 "model_number": "SPDK bdev Controller", 00:18:41.586 "max_namespaces": 32, 00:18:41.586 "min_cntlid": 1, 00:18:41.586 "max_cntlid": 65519, 00:18:41.586 "namespaces": [ 00:18:41.586 { 00:18:41.586 "nsid": 1, 00:18:41.586 "bdev_name": "Malloc1", 00:18:41.586 "name": "Malloc1", 00:18:41.586 "nguid": "A08547C4272242E5B5BA6AFBD34FDD7B", 00:18:41.586 "uuid": "a08547c4-2722-42e5-b5ba-6afbd34fdd7b" 00:18:41.586 }, 00:18:41.586 { 00:18:41.586 "nsid": 2, 00:18:41.586 "bdev_name": "Malloc3", 00:18:41.586 "name": "Malloc3", 00:18:41.586 "nguid": "E97A495A8A9C4F12B034145B171A31CF", 00:18:41.586 "uuid": "e97a495a-8a9c-4f12-b034-145b171a31cf" 00:18:41.586 } 00:18:41.586 ] 00:18:41.586 }, 00:18:41.586 { 00:18:41.586 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:41.586 "subtype": "NVMe", 00:18:41.586 "listen_addresses": [ 00:18:41.586 { 00:18:41.586 "trtype": "VFIOUSER", 00:18:41.586 "adrfam": "IPv4", 00:18:41.586 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:41.586 "trsvcid": "0" 00:18:41.586 } 00:18:41.586 ], 00:18:41.586 "allow_any_host": true, 00:18:41.586 "hosts": [], 00:18:41.586 "serial_number": "SPDK2", 00:18:41.586 "model_number": "SPDK bdev Controller", 00:18:41.586 "max_namespaces": 32, 00:18:41.586 "min_cntlid": 1, 00:18:41.586 "max_cntlid": 65519, 00:18:41.586 "namespaces": [ 
00:18:41.586 { 00:18:41.586 "nsid": 1, 00:18:41.586 "bdev_name": "Malloc2", 00:18:41.586 "name": "Malloc2", 00:18:41.586 "nguid": "39A3AA4361DE407B88BDE34177D48F4F", 00:18:41.586 "uuid": "39a3aa43-61de-407b-88bd-e34177d48f4f" 00:18:41.586 } 00:18:41.586 ] 00:18:41.586 } 00:18:41.586 ] 00:18:41.586 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 968707 00:18:41.586 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:41.586 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:41.586 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:41.586 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:41.586 [2024-12-13 06:24:33.215532] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:18:41.586 [2024-12-13 06:24:33.215560] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid968863 ] 00:18:41.846 [2024-12-13 06:24:33.253817] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:41.846 [2024-12-13 06:24:33.259066] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:41.846 [2024-12-13 06:24:33.259086] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fb9752e2000 00:18:41.846 [2024-12-13 06:24:33.260061] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.261073] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.262078] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.263091] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.264097] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.265105] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.266119] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:41.846 
[2024-12-13 06:24:33.267127] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:41.846 [2024-12-13 06:24:33.268134] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:41.846 [2024-12-13 06:24:33.268144] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fb973fec000 00:18:41.846 [2024-12-13 06:24:33.269059] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:41.846 [2024-12-13 06:24:33.278426] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:41.846 [2024-12-13 06:24:33.278455] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:41.846 [2024-12-13 06:24:33.283525] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:41.846 [2024-12-13 06:24:33.283559] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:41.846 [2024-12-13 06:24:33.283632] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:41.846 [2024-12-13 06:24:33.283646] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:41.846 [2024-12-13 06:24:33.283651] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:41.846 [2024-12-13 06:24:33.284525] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:41.846 [2024-12-13 06:24:33.284535] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:41.846 [2024-12-13 06:24:33.284541] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:41.846 [2024-12-13 06:24:33.285530] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:41.846 [2024-12-13 06:24:33.285539] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:41.846 [2024-12-13 06:24:33.285545] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.286549] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:41.846 [2024-12-13 06:24:33.286558] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.287556] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:41.846 [2024-12-13 06:24:33.287567] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:41.846 [2024-12-13 06:24:33.287572] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.287577] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.287684] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:41.846 [2024-12-13 06:24:33.287688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.287693] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:41.846 [2024-12-13 06:24:33.288569] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:41.846 [2024-12-13 06:24:33.289573] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:41.846 [2024-12-13 06:24:33.290581] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:41.846 [2024-12-13 06:24:33.291583] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:41.846 [2024-12-13 06:24:33.291622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:41.846 [2024-12-13 06:24:33.292593] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:41.846 [2024-12-13 06:24:33.292601] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:41.846 [2024-12-13 06:24:33.292606] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.292624] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:41.847 [2024-12-13 06:24:33.292631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.292640] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.847 [2024-12-13 06:24:33.292645] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.847 [2024-12-13 06:24:33.292648] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.847 [2024-12-13 06:24:33.292658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.301456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.301467] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:41.847 [2024-12-13 06:24:33.301472] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:41.847 [2024-12-13 06:24:33.301476] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:41.847 [2024-12-13 06:24:33.301480] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:41.847 [2024-12-13 06:24:33.301487] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:41.847 [2024-12-13 06:24:33.301491] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:41.847 [2024-12-13 06:24:33.301495] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.301504] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.301516] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.309453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.309465] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.847 [2024-12-13 06:24:33.309473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.847 [2024-12-13 06:24:33.309480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.847 [2024-12-13 06:24:33.309488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:41.847 [2024-12-13 06:24:33.309492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.309500] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.309508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.317454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.317461] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:41.847 [2024-12-13 06:24:33.317465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.317471] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.317476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.317484] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.325452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.325502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.325512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:41.847 
[2024-12-13 06:24:33.325518] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:41.847 [2024-12-13 06:24:33.325522] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:41.847 [2024-12-13 06:24:33.325529] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.847 [2024-12-13 06:24:33.325534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.333455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.333465] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:41.847 [2024-12-13 06:24:33.333476] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.333482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.333488] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.847 [2024-12-13 06:24:33.333492] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.847 [2024-12-13 06:24:33.333496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.847 [2024-12-13 06:24:33.333501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.341453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.341467] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.341475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.341481] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:41.847 [2024-12-13 06:24:33.341485] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.847 [2024-12-13 06:24:33.341488] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.847 [2024-12-13 06:24:33.341494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.349453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.349464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349470] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349484] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349488] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349493] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349497] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:41.847 [2024-12-13 06:24:33.349502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:41.847 [2024-12-13 06:24:33.349508] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:41.847 [2024-12-13 06:24:33.349524] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.357455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.357469] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.365454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.365466] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.373453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 
06:24:33.373465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.381453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:41.847 [2024-12-13 06:24:33.381467] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:41.847 [2024-12-13 06:24:33.381472] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:41.847 [2024-12-13 06:24:33.381475] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:41.847 [2024-12-13 06:24:33.381478] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:41.847 [2024-12-13 06:24:33.381481] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:41.847 [2024-12-13 06:24:33.381487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:41.847 [2024-12-13 06:24:33.381493] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:41.847 [2024-12-13 06:24:33.381497] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:41.847 [2024-12-13 06:24:33.381500] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.847 [2024-12-13 06:24:33.381505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:41.847 [2024-12-13 06:24:33.381511] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:41.847 [2024-12-13 06:24:33.381515] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:41.847 [2024-12-13 06:24:33.381518] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.848 [2024-12-13 06:24:33.381523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:41.848 [2024-12-13 06:24:33.381529] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:41.848 [2024-12-13 06:24:33.381533] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:41.848 [2024-12-13 06:24:33.381536] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:41.848 [2024-12-13 06:24:33.381541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:41.848 [2024-12-13 06:24:33.389454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:41.848 [2024-12-13 06:24:33.389473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:41.848 [2024-12-13 06:24:33.389482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:41.848 [2024-12-13 06:24:33.389488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:41.848 ===================================================== 00:18:41.848 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:41.848 ===================================================== 00:18:41.848 Controller Capabilities/Features 00:18:41.848 
================================ 00:18:41.848 Vendor ID: 4e58 00:18:41.848 Subsystem Vendor ID: 4e58 00:18:41.848 Serial Number: SPDK2 00:18:41.848 Model Number: SPDK bdev Controller 00:18:41.848 Firmware Version: 25.01 00:18:41.848 Recommended Arb Burst: 6 00:18:41.848 IEEE OUI Identifier: 8d 6b 50 00:18:41.848 Multi-path I/O 00:18:41.848 May have multiple subsystem ports: Yes 00:18:41.848 May have multiple controllers: Yes 00:18:41.848 Associated with SR-IOV VF: No 00:18:41.848 Max Data Transfer Size: 131072 00:18:41.848 Max Number of Namespaces: 32 00:18:41.848 Max Number of I/O Queues: 127 00:18:41.848 NVMe Specification Version (VS): 1.3 00:18:41.848 NVMe Specification Version (Identify): 1.3 00:18:41.848 Maximum Queue Entries: 256 00:18:41.848 Contiguous Queues Required: Yes 00:18:41.848 Arbitration Mechanisms Supported 00:18:41.848 Weighted Round Robin: Not Supported 00:18:41.848 Vendor Specific: Not Supported 00:18:41.848 Reset Timeout: 15000 ms 00:18:41.848 Doorbell Stride: 4 bytes 00:18:41.848 NVM Subsystem Reset: Not Supported 00:18:41.848 Command Sets Supported 00:18:41.848 NVM Command Set: Supported 00:18:41.848 Boot Partition: Not Supported 00:18:41.848 Memory Page Size Minimum: 4096 bytes 00:18:41.848 Memory Page Size Maximum: 4096 bytes 00:18:41.848 Persistent Memory Region: Not Supported 00:18:41.848 Optional Asynchronous Events Supported 00:18:41.848 Namespace Attribute Notices: Supported 00:18:41.848 Firmware Activation Notices: Not Supported 00:18:41.848 ANA Change Notices: Not Supported 00:18:41.848 PLE Aggregate Log Change Notices: Not Supported 00:18:41.848 LBA Status Info Alert Notices: Not Supported 00:18:41.848 EGE Aggregate Log Change Notices: Not Supported 00:18:41.848 Normal NVM Subsystem Shutdown event: Not Supported 00:18:41.848 Zone Descriptor Change Notices: Not Supported 00:18:41.848 Discovery Log Change Notices: Not Supported 00:18:41.848 Controller Attributes 00:18:41.848 128-bit Host Identifier: Supported 00:18:41.848 
Non-Operational Permissive Mode: Not Supported 00:18:41.848 NVM Sets: Not Supported 00:18:41.848 Read Recovery Levels: Not Supported 00:18:41.848 Endurance Groups: Not Supported 00:18:41.848 Predictable Latency Mode: Not Supported 00:18:41.848 Traffic Based Keep ALive: Not Supported 00:18:41.848 Namespace Granularity: Not Supported 00:18:41.848 SQ Associations: Not Supported 00:18:41.848 UUID List: Not Supported 00:18:41.848 Multi-Domain Subsystem: Not Supported 00:18:41.848 Fixed Capacity Management: Not Supported 00:18:41.848 Variable Capacity Management: Not Supported 00:18:41.848 Delete Endurance Group: Not Supported 00:18:41.848 Delete NVM Set: Not Supported 00:18:41.848 Extended LBA Formats Supported: Not Supported 00:18:41.848 Flexible Data Placement Supported: Not Supported 00:18:41.848 00:18:41.848 Controller Memory Buffer Support 00:18:41.848 ================================ 00:18:41.848 Supported: No 00:18:41.848 00:18:41.848 Persistent Memory Region Support 00:18:41.848 ================================ 00:18:41.848 Supported: No 00:18:41.848 00:18:41.848 Admin Command Set Attributes 00:18:41.848 ============================ 00:18:41.848 Security Send/Receive: Not Supported 00:18:41.848 Format NVM: Not Supported 00:18:41.848 Firmware Activate/Download: Not Supported 00:18:41.848 Namespace Management: Not Supported 00:18:41.848 Device Self-Test: Not Supported 00:18:41.848 Directives: Not Supported 00:18:41.848 NVMe-MI: Not Supported 00:18:41.848 Virtualization Management: Not Supported 00:18:41.848 Doorbell Buffer Config: Not Supported 00:18:41.848 Get LBA Status Capability: Not Supported 00:18:41.848 Command & Feature Lockdown Capability: Not Supported 00:18:41.848 Abort Command Limit: 4 00:18:41.848 Async Event Request Limit: 4 00:18:41.848 Number of Firmware Slots: N/A 00:18:41.848 Firmware Slot 1 Read-Only: N/A 00:18:41.848 Firmware Activation Without Reset: N/A 00:18:41.848 Multiple Update Detection Support: N/A 00:18:41.848 Firmware Update 
Granularity: No Information Provided 00:18:41.848 Per-Namespace SMART Log: No 00:18:41.848 Asymmetric Namespace Access Log Page: Not Supported 00:18:41.848 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:41.848 Command Effects Log Page: Supported 00:18:41.848 Get Log Page Extended Data: Supported 00:18:41.848 Telemetry Log Pages: Not Supported 00:18:41.848 Persistent Event Log Pages: Not Supported 00:18:41.848 Supported Log Pages Log Page: May Support 00:18:41.848 Commands Supported & Effects Log Page: Not Supported 00:18:41.848 Feature Identifiers & Effects Log Page:May Support 00:18:41.848 NVMe-MI Commands & Effects Log Page: May Support 00:18:41.848 Data Area 4 for Telemetry Log: Not Supported 00:18:41.848 Error Log Page Entries Supported: 128 00:18:41.848 Keep Alive: Supported 00:18:41.848 Keep Alive Granularity: 10000 ms 00:18:41.848 00:18:41.848 NVM Command Set Attributes 00:18:41.848 ========================== 00:18:41.848 Submission Queue Entry Size 00:18:41.848 Max: 64 00:18:41.848 Min: 64 00:18:41.848 Completion Queue Entry Size 00:18:41.848 Max: 16 00:18:41.848 Min: 16 00:18:41.848 Number of Namespaces: 32 00:18:41.848 Compare Command: Supported 00:18:41.848 Write Uncorrectable Command: Not Supported 00:18:41.848 Dataset Management Command: Supported 00:18:41.848 Write Zeroes Command: Supported 00:18:41.848 Set Features Save Field: Not Supported 00:18:41.848 Reservations: Not Supported 00:18:41.848 Timestamp: Not Supported 00:18:41.848 Copy: Supported 00:18:41.848 Volatile Write Cache: Present 00:18:41.848 Atomic Write Unit (Normal): 1 00:18:41.848 Atomic Write Unit (PFail): 1 00:18:41.848 Atomic Compare & Write Unit: 1 00:18:41.848 Fused Compare & Write: Supported 00:18:41.848 Scatter-Gather List 00:18:41.848 SGL Command Set: Supported (Dword aligned) 00:18:41.848 SGL Keyed: Not Supported 00:18:41.848 SGL Bit Bucket Descriptor: Not Supported 00:18:41.848 SGL Metadata Pointer: Not Supported 00:18:41.848 Oversized SGL: Not Supported 00:18:41.848 SGL 
Metadata Address: Not Supported 00:18:41.848 SGL Offset: Not Supported 00:18:41.848 Transport SGL Data Block: Not Supported 00:18:41.848 Replay Protected Memory Block: Not Supported 00:18:41.848 00:18:41.848 Firmware Slot Information 00:18:41.848 ========================= 00:18:41.848 Active slot: 1 00:18:41.848 Slot 1 Firmware Revision: 25.01 00:18:41.848 00:18:41.848 00:18:41.848 Commands Supported and Effects 00:18:41.848 ============================== 00:18:41.848 Admin Commands 00:18:41.848 -------------- 00:18:41.848 Get Log Page (02h): Supported 00:18:41.848 Identify (06h): Supported 00:18:41.848 Abort (08h): Supported 00:18:41.848 Set Features (09h): Supported 00:18:41.848 Get Features (0Ah): Supported 00:18:41.848 Asynchronous Event Request (0Ch): Supported 00:18:41.848 Keep Alive (18h): Supported 00:18:41.848 I/O Commands 00:18:41.848 ------------ 00:18:41.848 Flush (00h): Supported LBA-Change 00:18:41.848 Write (01h): Supported LBA-Change 00:18:41.848 Read (02h): Supported 00:18:41.848 Compare (05h): Supported 00:18:41.848 Write Zeroes (08h): Supported LBA-Change 00:18:41.848 Dataset Management (09h): Supported LBA-Change 00:18:41.848 Copy (19h): Supported LBA-Change 00:18:41.848 00:18:41.848 Error Log 00:18:41.848 ========= 00:18:41.848 00:18:41.848 Arbitration 00:18:41.848 =========== 00:18:41.848 Arbitration Burst: 1 00:18:41.848 00:18:41.848 Power Management 00:18:41.848 ================ 00:18:41.848 Number of Power States: 1 00:18:41.848 Current Power State: Power State #0 00:18:41.848 Power State #0: 00:18:41.848 Max Power: 0.00 W 00:18:41.848 Non-Operational State: Operational 00:18:41.848 Entry Latency: Not Reported 00:18:41.848 Exit Latency: Not Reported 00:18:41.848 Relative Read Throughput: 0 00:18:41.848 Relative Read Latency: 0 00:18:41.849 Relative Write Throughput: 0 00:18:41.849 Relative Write Latency: 0 00:18:41.849 Idle Power: Not Reported 00:18:41.849 Active Power: Not Reported 00:18:41.849 Non-Operational Permissive Mode: Not 
Supported 00:18:41.849 [2024-12-13 06:24:33.389570] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:41.849 [2024-12-13 06:24:33.397452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:41.849 [2024-12-13 06:24:33.397479] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:41.849 [2024-12-13 06:24:33.397487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.849 [2024-12-13 06:24:33.397493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.849 [2024-12-13 06:24:33.397499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.849 [2024-12-13 06:24:33.397504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:41.849 [2024-12-13 06:24:33.397554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:41.849 [2024-12-13 06:24:33.397565] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:41.849 
[2024-12-13 06:24:33.398556] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:41.849 [2024-12-13 06:24:33.398598] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:41.849 [2024-12-13 06:24:33.398604] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:41.849 [2024-12-13 06:24:33.399558] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:41.849 [2024-12-13 06:24:33.399568] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:41.849 [2024-12-13 06:24:33.399616] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:41.849 [2024-12-13 06:24:33.400576] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:41.849 
00:18:41.849 Health Information 00:18:41.849 ================== 00:18:41.849 Critical Warnings: 00:18:41.849 Available Spare Space: OK 00:18:41.849 Temperature: OK 00:18:41.849 Device Reliability: OK 00:18:41.849 Read Only: No 00:18:41.849 Volatile Memory Backup: OK 00:18:41.849 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:41.849 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:41.849 Available Spare: 0% 00:18:41.849 Available Spare Threshold: 0% 00:18:41.849 Life Percentage Used: 0% 00:18:41.849 Data Units Read: 0 00:18:41.849 Data Units Written: 0 00:18:41.849 Host Read Commands: 0 00:18:41.849 Host Write Commands: 0 00:18:41.849 Controller Busy Time: 0 minutes 00:18:41.849 Power Cycles: 0 00:18:41.849 Power On Hours: 0 hours 00:18:41.849 Unsafe Shutdowns: 0 00:18:41.849 Unrecoverable Media Errors: 0 00:18:41.849 Lifetime Error Log Entries: 0 00:18:41.849 Warning Temperature Time: 0 minutes 00:18:41.849 Critical Temperature Time: 0 minutes 00:18:41.849 00:18:41.849 Number of Queues 00:18:41.849 ================ 00:18:41.849 Number of I/O Submission Queues: 127 00:18:41.849 Number of I/O Completion Queues: 127 00:18:41.849 00:18:41.849 Active Namespaces 00:18:41.849 ================= 00:18:41.849 Namespace ID:1 00:18:41.849 Error Recovery Timeout: Unlimited 
00:18:41.849 Command Set Identifier: NVM (00h) 00:18:41.849 Deallocate: Supported 00:18:41.849 Deallocated/Unwritten Error: Not Supported 00:18:41.849 Deallocated Read Value: Unknown 00:18:41.849 Deallocate in Write Zeroes: Not Supported 00:18:41.849 Deallocated Guard Field: 0xFFFF 00:18:41.849 Flush: Supported 00:18:41.849 Reservation: Supported 00:18:41.849 Namespace Sharing Capabilities: Multiple Controllers 00:18:41.849 Size (in LBAs): 131072 (0GiB) 00:18:41.849 Capacity (in LBAs): 131072 (0GiB) 00:18:41.849 Utilization (in LBAs): 131072 (0GiB) 00:18:41.849 NGUID: 39A3AA4361DE407B88BDE34177D48F4F 00:18:41.849 UUID: 39a3aa43-61de-407b-88bd-e34177d48f4f 00:18:41.849 Thin Provisioning: Not Supported 00:18:41.849 Per-NS Atomic Units: Yes 00:18:41.849 Atomic Boundary Size (Normal): 0 00:18:41.849 Atomic Boundary Size (PFail): 0 00:18:41.849 Atomic Boundary Offset: 0 00:18:41.849 Maximum Single Source Range Length: 65535 00:18:41.849 Maximum Copy Length: 65535 00:18:41.849 Maximum Source Range Count: 1 00:18:41.849 NGUID/EUI64 Never Reused: No 00:18:41.849 Namespace Write Protected: No 00:18:41.849 Number of LBA Formats: 1 00:18:41.849 Current LBA Format: LBA Format #00 00:18:41.849 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:41.849 00:18:41.849 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:42.107 [2024-12-13 06:24:33.629810] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:47.368 Initializing NVMe Controllers 00:18:47.368 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:47.368 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:18:47.368 Initialization complete. Launching workers. 00:18:47.368 ======================================================== 00:18:47.368 Latency(us) 00:18:47.368 Device Information : IOPS MiB/s Average min max 00:18:47.368 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39959.78 156.09 3204.78 967.69 8149.24 00:18:47.368 ======================================================== 00:18:47.368 Total : 39959.78 156.09 3204.78 967.69 8149.24 00:18:47.368 00:18:47.368 [2024-12-13 06:24:38.736719] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:47.368 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:47.368 [2024-12-13 06:24:38.972364] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:52.627 Initializing NVMe Controllers 00:18:52.627 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:52.627 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:18:52.627 Initialization complete. Launching workers. 
00:18:52.627 ======================================================== 00:18:52.627 Latency(us) 00:18:52.627 Device Information : IOPS MiB/s Average min max 00:18:52.627 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39919.36 155.94 3206.07 965.83 9592.16 00:18:52.627 ======================================================== 00:18:52.627 Total : 39919.36 155.94 3206.07 965.83 9592.16 00:18:52.627 00:18:52.627 [2024-12-13 06:24:43.991790] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:52.627 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:52.627 [2024-12-13 06:24:44.195057] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:57.897 [2024-12-13 06:24:49.323540] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:57.897 Initializing NVMe Controllers 00:18:57.897 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:57.897 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:57.897 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:18:57.897 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:18:57.897 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:18:57.897 Initialization complete. Launching workers. 
00:18:57.897 Starting thread on core 2 00:18:57.897 Starting thread on core 3 00:18:57.897 Starting thread on core 1 00:18:57.897 06:24:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:18:58.155 [2024-12-13 06:24:49.612754] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.436 [2024-12-13 06:24:52.688694] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:01.436 Initializing NVMe Controllers 00:19:01.436 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.436 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:19:01.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:19:01.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:19:01.436 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:19:01.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:19:01.436 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:19:01.436 Initialization complete. Launching workers. 
00:19:01.436 Starting thread on core 1 with urgent priority queue 00:19:01.436 Starting thread on core 2 with urgent priority queue 00:19:01.436 Starting thread on core 3 with urgent priority queue 00:19:01.436 Starting thread on core 0 with urgent priority queue 00:19:01.436 SPDK bdev Controller (SPDK2 ) core 0: 9859.67 IO/s 10.14 secs/100000 ios 00:19:01.436 SPDK bdev Controller (SPDK2 ) core 1: 7942.00 IO/s 12.59 secs/100000 ios 00:19:01.436 SPDK bdev Controller (SPDK2 ) core 2: 7776.67 IO/s 12.86 secs/100000 ios 00:19:01.436 SPDK bdev Controller (SPDK2 ) core 3: 8236.33 IO/s 12.14 secs/100000 ios 00:19:01.436 ======================================================== 00:19:01.436 00:19:01.436 06:24:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:01.436 [2024-12-13 06:24:52.977919] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:01.436 Initializing NVMe Controllers 00:19:01.436 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.436 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:01.436 Namespace ID: 1 size: 0GB 00:19:01.436 Initialization complete. 00:19:01.436 INFO: using host memory buffer for IO 00:19:01.436 Hello world! 
00:19:01.436 [2024-12-13 06:24:52.990009] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:01.436 06:24:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:01.693 [2024-12-13 06:24:53.270187] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.066 Initializing NVMe Controllers 00:19:03.066 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:03.066 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:03.066 Initialization complete. Launching workers. 00:19:03.066 submit (in ns) avg, min, max = 6524.7, 3161.0, 3999200.0 00:19:03.066 complete (in ns) avg, min, max = 20660.9, 1754.3, 5991761.9 00:19:03.066 00:19:03.066 Submit histogram 00:19:03.066 ================ 00:19:03.066 Range in us Cumulative Count 00:19:03.066 3.154 - 3.170: 0.0060% ( 1) 00:19:03.066 3.170 - 3.185: 0.0180% ( 2) 00:19:03.066 3.185 - 3.200: 0.0722% ( 9) 00:19:03.066 3.200 - 3.215: 0.3068% ( 39) 00:19:03.066 3.215 - 3.230: 0.8904% ( 97) 00:19:03.066 3.230 - 3.246: 2.3645% ( 245) 00:19:03.066 3.246 - 3.261: 6.4978% ( 687) 00:19:03.066 3.261 - 3.276: 12.7850% ( 1045) 00:19:03.066 3.276 - 3.291: 18.9218% ( 1020) 00:19:03.066 3.291 - 3.307: 27.0321% ( 1348) 00:19:03.066 3.307 - 3.322: 34.4504% ( 1233) 00:19:03.066 3.322 - 3.337: 39.9314% ( 911) 00:19:03.066 3.337 - 3.352: 44.6724% ( 788) 00:19:03.066 3.352 - 3.368: 49.2630% ( 763) 00:19:03.066 3.368 - 3.383: 54.0641% ( 798) 00:19:03.066 3.383 - 3.398: 58.0170% ( 657) 00:19:03.066 3.398 - 3.413: 64.4125% ( 1063) 00:19:03.066 3.413 - 3.429: 70.6757% ( 1041) 00:19:03.066 3.429 - 3.444: 75.3444% ( 776) 00:19:03.066 3.444 - 3.459: 80.8736% ( 919) 00:19:03.066 3.459 - 3.474: 84.4895% ( 601) 
00:19:03.066 3.474 - 3.490: 86.7337% ( 373) 00:19:03.066 3.490 - 3.505: 87.6482% ( 152) 00:19:03.066 3.505 - 3.520: 88.1295% ( 80) 00:19:03.066 3.520 - 3.535: 88.5145% ( 64) 00:19:03.066 3.535 - 3.550: 88.9958% ( 80) 00:19:03.066 3.550 - 3.566: 89.7479% ( 125) 00:19:03.066 3.566 - 3.581: 90.6023% ( 142) 00:19:03.066 3.581 - 3.596: 91.7213% ( 186) 00:19:03.066 3.596 - 3.611: 92.7321% ( 168) 00:19:03.066 3.611 - 3.627: 93.5202% ( 131) 00:19:03.066 3.627 - 3.642: 94.2061% ( 114) 00:19:03.066 3.642 - 3.657: 94.9101% ( 117) 00:19:03.066 3.657 - 3.672: 95.7163% ( 134) 00:19:03.066 3.672 - 3.688: 96.4322% ( 119) 00:19:03.066 3.688 - 3.703: 97.2505% ( 136) 00:19:03.066 3.703 - 3.718: 97.8702% ( 103) 00:19:03.066 3.718 - 3.733: 98.3755% ( 84) 00:19:03.066 3.733 - 3.749: 98.6824% ( 51) 00:19:03.066 3.749 - 3.764: 98.9652% ( 47) 00:19:03.066 3.764 - 3.779: 99.1938% ( 38) 00:19:03.066 3.779 - 3.794: 99.3683% ( 29) 00:19:03.066 3.794 - 3.810: 99.4525% ( 14) 00:19:03.066 3.810 - 3.825: 99.4946% ( 7) 00:19:03.066 3.825 - 3.840: 99.5187% ( 4) 00:19:03.066 3.840 - 3.855: 99.5247% ( 1) 00:19:03.066 3.870 - 3.886: 99.5427% ( 3) 00:19:03.066 3.901 - 3.931: 99.5488% ( 1) 00:19:03.066 3.931 - 3.962: 99.5608% ( 2) 00:19:03.066 3.962 - 3.992: 99.5668% ( 1) 00:19:03.066 3.992 - 4.023: 99.5728% ( 1) 00:19:03.066 5.029 - 5.059: 99.5788% ( 1) 00:19:03.066 5.150 - 5.181: 99.5849% ( 1) 00:19:03.066 5.211 - 5.242: 99.5909% ( 1) 00:19:03.066 5.242 - 5.272: 99.5969% ( 1) 00:19:03.066 5.333 - 5.364: 99.6089% ( 2) 00:19:03.066 5.425 - 5.455: 99.6149% ( 1) 00:19:03.066 5.455 - 5.486: 99.6270% ( 2) 00:19:03.066 5.547 - 5.577: 99.6571% ( 5) 00:19:03.066 5.577 - 5.608: 99.6631% ( 1) 00:19:03.066 5.669 - 5.699: 99.6691% ( 1) 00:19:03.066 5.699 - 5.730: 99.6751% ( 1) 00:19:03.066 5.760 - 5.790: 99.6871% ( 2) 00:19:03.066 5.790 - 5.821: 99.6932% ( 1) 00:19:03.066 5.882 - 5.912: 99.6992% ( 1) 00:19:03.066 5.912 - 5.943: 99.7112% ( 2) 00:19:03.066 5.943 - 5.973: 99.7172% ( 1) 00:19:03.066 5.973 - 6.004: 
99.7293% ( 2) 00:19:03.066 6.004 - 6.034: 99.7353% ( 1) 00:19:03.066 6.065 - 6.095: 99.7413% ( 1) 00:19:03.066 6.126 - 6.156: 99.7473% ( 1) 00:19:03.066 6.187 - 6.217: 99.7533% ( 1) 00:19:03.066 6.309 - 6.339: 99.7593% ( 1) 00:19:03.066 6.400 - 6.430: 99.7654% ( 1) 00:19:03.066 6.430 - 6.461: 99.7714% ( 1) 00:19:03.066 6.522 - 6.552: 99.7774% ( 1) 00:19:03.066 6.613 - 6.644: 99.7894% ( 2) 00:19:03.066 6.674 - 6.705: 99.7954% ( 1) 00:19:03.066 6.766 - 6.796: 99.8015% ( 1) 00:19:03.066 6.857 - 6.888: 99.8075% ( 1) 00:19:03.066 6.888 - 6.918: 99.8135% ( 1) 00:19:03.066 6.979 - 7.010: 99.8315% ( 3) 00:19:03.066 7.131 - 7.162: 99.8376% ( 1) 00:19:03.066 7.192 - 7.223: 99.8436% ( 1) 00:19:03.066 7.314 - 7.345: 99.8496% ( 1) 00:19:03.066 7.497 - 7.528: 99.8616% ( 2) 00:19:03.066 7.589 - 7.619: 99.8676% ( 1) 00:19:03.066 7.650 - 7.680: 99.8737% ( 1) 00:19:03.066 7.680 - 7.710: 99.8797% ( 1) 00:19:03.066 7.985 - 8.046: 99.8857% ( 1) 00:19:03.066 8.107 - 8.168: 99.8917% ( 1) 00:19:03.066 8.290 - 8.350: 99.8977% ( 1) 00:19:03.066 9.326 - 9.387: 99.9037% ( 1) 00:19:03.066 9.448 - 9.509: 99.9098% ( 1) 00:19:03.066 13.227 - 13.288: 99.9158% ( 1) 00:19:03.066 15.421 - 15.482: 99.9218% ( 1) 00:19:03.066 3994.575 - 4025.783: 100.0000% ( 13) 00:19:03.066 [2024-12-13 06:24:54.363438] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.066 00:19:03.066 Complete histogram 00:19:03.066 ================== 00:19:03.066 Range in us Cumulative Count 00:19:03.066 1.752 - 1.760: 0.0180% ( 3) 00:19:03.066 1.760 - 1.768: 0.0481% ( 5) 00:19:03.066 1.768 - 1.775: 0.0722% ( 4) 00:19:03.066 1.775 - 1.783: 0.0963% ( 4) 00:19:03.066 1.783 - 1.790: 0.1023% ( 1) 00:19:03.066 1.790 - 1.798: 0.1143% ( 2) 00:19:03.066 1.798 - 1.806: 0.7220% ( 101) 00:19:03.066 1.806 - 1.813: 7.2077% ( 1078) 00:19:03.066 1.813 - 1.821: 27.7721% ( 3418) 00:19:03.066 1.821 - 1.829: 50.6829% ( 3808) 00:19:03.066 1.829 - 1.836: 62.5654% ( 1975) 00:19:03.066 1.836 - 
1.844: 68.6722% ( 1015) 00:19:03.066 1.844 - 1.851: 72.5648% ( 647) 00:19:03.066 1.851 - 1.859: 77.6247% ( 841) 00:19:03.066 1.859 - 1.867: 85.5544% ( 1318) 00:19:03.066 1.867 - 1.874: 91.2280% ( 943) 00:19:03.066 1.874 - 1.882: 93.7790% ( 424) 00:19:03.066 1.882 - 1.890: 95.2831% ( 250) 00:19:03.066 1.890 - 1.897: 96.5405% ( 209) 00:19:03.066 1.897 - 1.905: 97.3407% ( 133) 00:19:03.066 1.905 - 1.912: 97.8822% ( 90) 00:19:03.066 1.912 - 1.920: 98.2372% ( 59) 00:19:03.066 1.920 - 1.928: 98.4658% ( 38) 00:19:03.066 1.928 - 1.935: 98.7365% ( 45) 00:19:03.066 1.935 - 1.943: 98.8930% ( 26) 00:19:03.067 1.943 - 1.950: 99.1096% ( 36) 00:19:03.067 1.950 - 1.966: 99.2540% ( 24) 00:19:03.067 1.966 - 1.981: 99.3021% ( 8) 00:19:03.067 1.981 - 1.996: 99.3382% ( 6) 00:19:03.067 1.996 - 2.011: 99.3623% ( 4) 00:19:03.067 2.011 - 2.027: 99.3683% ( 1) 00:19:03.067 2.027 - 2.042: 99.3743% ( 1) 00:19:03.067 3.398 - 3.413: 99.3803% ( 1) 00:19:03.067 3.642 - 3.657: 99.3863% ( 1) 00:19:03.067 3.703 - 3.718: 99.3923% ( 1) 00:19:03.067 3.749 - 3.764: 99.3984% ( 1) 00:19:03.067 3.764 - 3.779: 99.4044% ( 1) 00:19:03.067 3.886 - 3.901: 99.4104% ( 1) 00:19:03.067 4.175 - 4.206: 99.4164% ( 1) 00:19:03.067 4.389 - 4.419: 99.4224% ( 1) 00:19:03.067 4.450 - 4.480: 99.4284% ( 1) 00:19:03.067 4.480 - 4.510: 99.4345% ( 1) 00:19:03.067 4.602 - 4.632: 99.4405% ( 1) 00:19:03.067 4.724 - 4.754: 99.4465% ( 1) 00:19:03.067 4.968 - 4.998: 99.4525% ( 1) 00:19:03.067 4.998 - 5.029: 99.4585% ( 1) 00:19:03.067 5.181 - 5.211: 99.4645% ( 1) 00:19:03.067 5.486 - 5.516: 99.4705% ( 1) 00:19:03.067 5.760 - 5.790: 99.4766% ( 1) 00:19:03.067 5.821 - 5.851: 99.4826% ( 1) 00:19:03.067 5.851 - 5.882: 99.4886% ( 1) 00:19:03.067 5.943 - 5.973: 99.4946% ( 1) 00:19:03.067 6.400 - 6.430: 99.5006% ( 1) 00:19:03.067 6.583 - 6.613: 99.5066% ( 1) 00:19:03.067 6.613 - 6.644: 99.5127% ( 1) 00:19:03.067 6.644 - 6.674: 99.5187% ( 1) 00:19:03.067 12.861 - 12.922: 99.5247% ( 1) 00:19:03.067 13.227 - 13.288: 99.5307% ( 1) 00:19:03.067 
3011.535 - 3027.139: 99.5367% ( 1) 00:19:03.067 3167.573 - 3183.177: 99.5427% ( 1) 00:19:03.067 3994.575 - 4025.783: 99.9880% ( 74) 00:19:03.067 4962.011 - 4993.219: 99.9940% ( 1) 00:19:03.067 5960.655 - 5991.863: 100.0000% ( 1) 00:19:03.067 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:03.067 [ 00:19:03.067 { 00:19:03.067 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:03.067 "subtype": "Discovery", 00:19:03.067 "listen_addresses": [], 00:19:03.067 "allow_any_host": true, 00:19:03.067 "hosts": [] 00:19:03.067 }, 00:19:03.067 { 00:19:03.067 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:03.067 "subtype": "NVMe", 00:19:03.067 "listen_addresses": [ 00:19:03.067 { 00:19:03.067 "trtype": "VFIOUSER", 00:19:03.067 "adrfam": "IPv4", 00:19:03.067 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:03.067 "trsvcid": "0" 00:19:03.067 } 00:19:03.067 ], 00:19:03.067 "allow_any_host": true, 00:19:03.067 "hosts": [], 00:19:03.067 "serial_number": "SPDK1", 00:19:03.067 "model_number": "SPDK bdev Controller", 00:19:03.067 "max_namespaces": 32, 00:19:03.067 "min_cntlid": 1, 00:19:03.067 "max_cntlid": 65519, 00:19:03.067 "namespaces": [ 00:19:03.067 { 00:19:03.067 "nsid": 1, 00:19:03.067 "bdev_name": "Malloc1", 00:19:03.067 "name": "Malloc1", 
00:19:03.067 "nguid": "A08547C4272242E5B5BA6AFBD34FDD7B", 00:19:03.067 "uuid": "a08547c4-2722-42e5-b5ba-6afbd34fdd7b" 00:19:03.067 }, 00:19:03.067 { 00:19:03.067 "nsid": 2, 00:19:03.067 "bdev_name": "Malloc3", 00:19:03.067 "name": "Malloc3", 00:19:03.067 "nguid": "E97A495A8A9C4F12B034145B171A31CF", 00:19:03.067 "uuid": "e97a495a-8a9c-4f12-b034-145b171a31cf" 00:19:03.067 } 00:19:03.067 ] 00:19:03.067 }, 00:19:03.067 { 00:19:03.067 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:03.067 "subtype": "NVMe", 00:19:03.067 "listen_addresses": [ 00:19:03.067 { 00:19:03.067 "trtype": "VFIOUSER", 00:19:03.067 "adrfam": "IPv4", 00:19:03.067 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:03.067 "trsvcid": "0" 00:19:03.067 } 00:19:03.067 ], 00:19:03.067 "allow_any_host": true, 00:19:03.067 "hosts": [], 00:19:03.067 "serial_number": "SPDK2", 00:19:03.067 "model_number": "SPDK bdev Controller", 00:19:03.067 "max_namespaces": 32, 00:19:03.067 "min_cntlid": 1, 00:19:03.067 "max_cntlid": 65519, 00:19:03.067 "namespaces": [ 00:19:03.067 { 00:19:03.067 "nsid": 1, 00:19:03.067 "bdev_name": "Malloc2", 00:19:03.067 "name": "Malloc2", 00:19:03.067 "nguid": "39A3AA4361DE407B88BDE34177D48F4F", 00:19:03.067 "uuid": "39a3aa43-61de-407b-88bd-e34177d48f4f" 00:19:03.067 } 00:19:03.067 ] 00:19:03.067 } 00:19:03.067 ] 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=972282 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 
00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:03.067 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:03.325 [2024-12-13 06:24:54.759898] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:03.325 Malloc4 00:19:03.325 06:24:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:03.583 [2024-12-13 06:24:55.001730] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:03.583 Asynchronous Event Request test 00:19:03.583 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:03.583 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:03.583 Registering asynchronous event callbacks... 00:19:03.583 Starting namespace attribute notice tests for all controllers... 
00:19:03.583 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:03.583 aer_cb - Changed Namespace 00:19:03.583 Cleaning up... 00:19:03.583 [ 00:19:03.583 { 00:19:03.583 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:03.583 "subtype": "Discovery", 00:19:03.583 "listen_addresses": [], 00:19:03.583 "allow_any_host": true, 00:19:03.583 "hosts": [] 00:19:03.583 }, 00:19:03.583 { 00:19:03.583 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:03.583 "subtype": "NVMe", 00:19:03.583 "listen_addresses": [ 00:19:03.583 { 00:19:03.583 "trtype": "VFIOUSER", 00:19:03.583 "adrfam": "IPv4", 00:19:03.583 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:03.583 "trsvcid": "0" 00:19:03.583 } 00:19:03.583 ], 00:19:03.583 "allow_any_host": true, 00:19:03.583 "hosts": [], 00:19:03.583 "serial_number": "SPDK1", 00:19:03.583 "model_number": "SPDK bdev Controller", 00:19:03.583 "max_namespaces": 32, 00:19:03.583 "min_cntlid": 1, 00:19:03.583 "max_cntlid": 65519, 00:19:03.583 "namespaces": [ 00:19:03.583 { 00:19:03.583 "nsid": 1, 00:19:03.583 "bdev_name": "Malloc1", 00:19:03.583 "name": "Malloc1", 00:19:03.583 "nguid": "A08547C4272242E5B5BA6AFBD34FDD7B", 00:19:03.583 "uuid": "a08547c4-2722-42e5-b5ba-6afbd34fdd7b" 00:19:03.583 }, 00:19:03.583 { 00:19:03.583 "nsid": 2, 00:19:03.583 "bdev_name": "Malloc3", 00:19:03.583 "name": "Malloc3", 00:19:03.583 "nguid": "E97A495A8A9C4F12B034145B171A31CF", 00:19:03.583 "uuid": "e97a495a-8a9c-4f12-b034-145b171a31cf" 00:19:03.583 } 00:19:03.583 ] 00:19:03.583 }, 00:19:03.583 { 00:19:03.583 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:03.583 "subtype": "NVMe", 00:19:03.583 "listen_addresses": [ 00:19:03.583 { 00:19:03.583 "trtype": "VFIOUSER", 00:19:03.583 "adrfam": "IPv4", 00:19:03.583 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:03.583 "trsvcid": "0" 00:19:03.583 } 00:19:03.583 ], 00:19:03.583 "allow_any_host": true, 00:19:03.583 "hosts": [], 00:19:03.583 "serial_number": 
"SPDK2", 00:19:03.583 "model_number": "SPDK bdev Controller", 00:19:03.583 "max_namespaces": 32, 00:19:03.583 "min_cntlid": 1, 00:19:03.583 "max_cntlid": 65519, 00:19:03.583 "namespaces": [ 00:19:03.583 { 00:19:03.583 "nsid": 1, 00:19:03.583 "bdev_name": "Malloc2", 00:19:03.583 "name": "Malloc2", 00:19:03.583 "nguid": "39A3AA4361DE407B88BDE34177D48F4F", 00:19:03.583 "uuid": "39a3aa43-61de-407b-88bd-e34177d48f4f" 00:19:03.583 }, 00:19:03.583 { 00:19:03.583 "nsid": 2, 00:19:03.583 "bdev_name": "Malloc4", 00:19:03.583 "name": "Malloc4", 00:19:03.583 "nguid": "66A06ECB523E4673AF11F414F17A301F", 00:19:03.583 "uuid": "66a06ecb-523e-4673-af11-f414f17a301f" 00:19:03.583 } 00:19:03.583 ] 00:19:03.583 } 00:19:03.583 ] 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 972282 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 964473 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 964473 ']' 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 964473 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.583 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 964473 00:19:03.841 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.841 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.841 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 964473' 00:19:03.841 killing process with pid 964473 00:19:03.841 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 964473 00:19:03.841 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 964473 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=972511 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 972511' 00:19:04.100 Process pid: 972511 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 972511 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 972511 ']' 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 
-- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:04.100 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.100 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:04.100 [2024-12-13 06:24:55.555172] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:04.100 [2024-12-13 06:24:55.555991] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:04.100 [2024-12-13 06:24:55.556027] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:04.100 [2024-12-13 06:24:55.633116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:04.100 [2024-12-13 06:24:55.655888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:04.100 [2024-12-13 06:24:55.655928] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:04.100 [2024-12-13 06:24:55.655936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:04.101 [2024-12-13 06:24:55.655942] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:04.101 [2024-12-13 06:24:55.655947] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:04.101 [2024-12-13 06:24:55.659467] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:04.101 [2024-12-13 06:24:55.659518] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:04.101 [2024-12-13 06:24:55.659628] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.101 [2024-12-13 06:24:55.659629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:04.101 [2024-12-13 06:24:55.723552] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:04.101 [2024-12-13 06:24:55.724117] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:04.101 [2024-12-13 06:24:55.724546] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:04.101 [2024-12-13 06:24:55.724969] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:04.101 [2024-12-13 06:24:55.725005] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:04.360 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:04.360 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:04.360 06:24:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:05.296 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:05.555 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:05.555 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:05.555 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:05.555 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:05.555 06:24:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:05.555 Malloc1 00:19:05.556 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:05.814 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:06.072 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:06.330 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:06.330 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:06.330 06:24:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:06.330 Malloc2 00:19:06.587 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:06.587 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:06.845 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 972511 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 972511 ']' 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 972511 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:07.103 06:24:58 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 972511 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 972511' 00:19:07.103 killing process with pid 972511 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 972511 00:19:07.103 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 972511 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:07.362 00:19:07.362 real 0m50.744s 00:19:07.362 user 3m16.456s 00:19:07.362 sys 0m3.202s 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:07.362 ************************************ 00:19:07.362 END TEST nvmf_vfio_user 00:19:07.362 ************************************ 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:07.362 ************************************ 00:19:07.362 START TEST nvmf_vfio_user_nvme_compliance 00:19:07.362 ************************************ 00:19:07.362 06:24:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:07.362 * Looking for test storage... 00:19:07.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:07.362 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:07.362 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:07.362 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:07.621 06:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.621 06:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:07.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.621 --rc genhtml_branch_coverage=1 00:19:07.621 --rc genhtml_function_coverage=1 00:19:07.621 --rc genhtml_legend=1 00:19:07.621 --rc geninfo_all_blocks=1 00:19:07.621 --rc geninfo_unexecuted_blocks=1 00:19:07.621 00:19:07.621 ' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:07.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.621 --rc genhtml_branch_coverage=1 00:19:07.621 --rc genhtml_function_coverage=1 00:19:07.621 --rc genhtml_legend=1 00:19:07.621 --rc geninfo_all_blocks=1 00:19:07.621 --rc geninfo_unexecuted_blocks=1 00:19:07.621 00:19:07.621 ' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:07.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.621 --rc genhtml_branch_coverage=1 00:19:07.621 --rc genhtml_function_coverage=1 00:19:07.621 --rc 
genhtml_legend=1 00:19:07.621 --rc geninfo_all_blocks=1 00:19:07.621 --rc geninfo_unexecuted_blocks=1 00:19:07.621 00:19:07.621 ' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:07.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.621 --rc genhtml_branch_coverage=1 00:19:07.621 --rc genhtml_function_coverage=1 00:19:07.621 --rc genhtml_legend=1 00:19:07.621 --rc geninfo_all_blocks=1 00:19:07.621 --rc geninfo_unexecuted_blocks=1 00:19:07.621 00:19:07.621 ' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.621 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.622 06:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:07.622 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:07.622 06:24:59 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=973051 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 973051' 00:19:07.622 Process pid: 973051 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 973051 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 973051 ']' 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:07.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:07.622 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:07.622 [2024-12-13 06:24:59.172949] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:19:07.622 [2024-12-13 06:24:59.172995] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:07.622 [2024-12-13 06:24:59.245223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:07.622 [2024-12-13 06:24:59.266790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:07.622 [2024-12-13 06:24:59.266826] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:07.622 [2024-12-13 06:24:59.266834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:07.622 [2024-12-13 06:24:59.266839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:07.622 [2024-12-13 06:24:59.266844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:07.622 [2024-12-13 06:24:59.268112] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:07.622 [2024-12-13 06:24:59.268218] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:07.622 [2024-12-13 06:24:59.268217] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.880 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:07.880 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:07.880 06:24:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.814 06:25:00 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.814 malloc0 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:08.814 06:25:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:09.073 00:19:09.073 00:19:09.073 CUnit - A unit testing framework for C - Version 2.1-3 00:19:09.073 http://cunit.sourceforge.net/ 00:19:09.073 00:19:09.073 00:19:09.073 Suite: nvme_compliance 00:19:09.073 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-13 06:25:00.602902] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.073 [2024-12-13 06:25:00.604226] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:09.073 [2024-12-13 06:25:00.604242] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:09.073 [2024-12-13 06:25:00.604248] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:09.073 [2024-12-13 06:25:00.606930] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.073 passed 00:19:09.073 Test: admin_identify_ctrlr_verify_fused ...[2024-12-13 06:25:00.684488] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.073 [2024-12-13 06:25:00.688509] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.073 passed 00:19:09.331 Test: admin_identify_ns ...[2024-12-13 06:25:00.767768] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.331 [2024-12-13 06:25:00.828470] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:09.331 [2024-12-13 06:25:00.836460] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:09.331 [2024-12-13 06:25:00.857539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:09.331 passed 00:19:09.331 Test: admin_get_features_mandatory_features ...[2024-12-13 06:25:00.934126] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.331 [2024-12-13 06:25:00.937148] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.331 passed 00:19:09.589 Test: admin_get_features_optional_features ...[2024-12-13 06:25:01.010661] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.589 [2024-12-13 06:25:01.014688] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.589 passed 00:19:09.589 Test: admin_set_features_number_of_queues ...[2024-12-13 06:25:01.089677] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.589 [2024-12-13 06:25:01.198536] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.589 passed 00:19:09.848 Test: admin_get_log_page_mandatory_logs ...[2024-12-13 06:25:01.271080] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.848 [2024-12-13 06:25:01.274102] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.848 passed 00:19:09.848 Test: admin_get_log_page_with_lpo ...[2024-12-13 06:25:01.350741] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:09.848 [2024-12-13 06:25:01.418459] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:09.848 [2024-12-13 06:25:01.431529] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:09.848 passed 00:19:10.106 Test: fabric_property_get ...[2024-12-13 06:25:01.508506] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.106 [2024-12-13 06:25:01.509737] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:10.106 [2024-12-13 06:25:01.511526] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.106 passed 00:19:10.106 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-13 06:25:01.586059] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.106 [2024-12-13 06:25:01.587302] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:10.106 [2024-12-13 06:25:01.589083] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.106 passed 00:19:10.106 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-13 06:25:01.667710] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.106 [2024-12-13 06:25:01.751454] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.364 [2024-12-13 06:25:01.767454] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.364 [2024-12-13 06:25:01.772532] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.364 passed 00:19:10.364 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-13 06:25:01.846104] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.364 [2024-12-13 06:25:01.847335] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:10.364 [2024-12-13 06:25:01.849117] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.364 passed 00:19:10.364 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-13 06:25:01.925854] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.364 [2024-12-13 06:25:02.002457] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:10.622 [2024-12-13 
06:25:02.026465] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:10.622 [2024-12-13 06:25:02.031542] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.622 passed 00:19:10.622 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-13 06:25:02.108105] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.622 [2024-12-13 06:25:02.109339] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:10.622 [2024-12-13 06:25:02.109366] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:10.622 [2024-12-13 06:25:02.111124] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.622 passed 00:19:10.622 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-13 06:25:02.185660] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.881 [2024-12-13 06:25:02.281483] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:10.881 [2024-12-13 06:25:02.289464] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:10.881 [2024-12-13 06:25:02.297471] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:10.881 [2024-12-13 06:25:02.305458] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:10.881 [2024-12-13 06:25:02.334539] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.881 passed 00:19:10.881 Test: admin_create_io_sq_verify_pc ...[2024-12-13 06:25:02.407164] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:10.881 [2024-12-13 06:25:02.423467] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:10.881 [2024-12-13 06:25:02.441332] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:10.881 passed 00:19:10.881 Test: admin_create_io_qp_max_qps ...[2024-12-13 06:25:02.515881] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:12.254 [2024-12-13 06:25:03.607458] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:12.513 [2024-12-13 06:25:03.986667] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:12.513 passed 00:19:12.513 Test: admin_create_io_sq_shared_cq ...[2024-12-13 06:25:04.063702] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:12.773 [2024-12-13 06:25:04.194457] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:12.773 [2024-12-13 06:25:04.231513] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:12.773 passed 00:19:12.773 00:19:12.773 Run Summary: Type Total Ran Passed Failed Inactive 00:19:12.773 suites 1 1 n/a 0 0 00:19:12.773 tests 18 18 18 0 0 00:19:12.773 asserts 360 360 360 0 n/a 00:19:12.773 00:19:12.773 Elapsed time = 1.489 seconds 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 973051 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 973051 ']' 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 973051 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 973051 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 973051' 00:19:12.773 killing process with pid 973051 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 973051 00:19:12.773 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 973051 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:13.032 00:19:13.032 real 0m5.582s 00:19:13.032 user 0m15.648s 00:19:13.032 sys 0m0.520s 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:13.032 ************************************ 00:19:13.032 END TEST nvmf_vfio_user_nvme_compliance 00:19:13.032 ************************************ 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:13.032 ************************************ 00:19:13.032 START TEST nvmf_vfio_user_fuzz 00:19:13.032 ************************************ 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:13.032 * Looking for test storage... 00:19:13.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:13.032 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:13.292 06:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:13.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.292 --rc genhtml_branch_coverage=1 00:19:13.292 --rc genhtml_function_coverage=1 00:19:13.292 --rc genhtml_legend=1 00:19:13.292 --rc geninfo_all_blocks=1 00:19:13.292 --rc geninfo_unexecuted_blocks=1 00:19:13.292 00:19:13.292 ' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:13.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.292 --rc genhtml_branch_coverage=1 00:19:13.292 --rc genhtml_function_coverage=1 00:19:13.292 --rc genhtml_legend=1 00:19:13.292 --rc geninfo_all_blocks=1 00:19:13.292 --rc geninfo_unexecuted_blocks=1 00:19:13.292 00:19:13.292 ' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:13.292 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:13.292 --rc genhtml_branch_coverage=1 00:19:13.292 --rc genhtml_function_coverage=1 00:19:13.292 --rc genhtml_legend=1 00:19:13.292 --rc geninfo_all_blocks=1 00:19:13.292 --rc geninfo_unexecuted_blocks=1 00:19:13.292 00:19:13.292 ' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:13.292 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:13.292 --rc genhtml_branch_coverage=1 00:19:13.292 --rc genhtml_function_coverage=1 00:19:13.292 --rc genhtml_legend=1 00:19:13.292 --rc geninfo_all_blocks=1 00:19:13.292 --rc geninfo_unexecuted_blocks=1 00:19:13.292 00:19:13.292 ' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:13.292 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.293 06:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:13.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=974006 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 974006' 00:19:13.293 Process pid: 974006 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 974006 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 974006 ']' 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:13.293 06:25:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:13.293 06:25:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:13.551 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.551 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:13.551 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.486 malloc0 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:14.486 06:25:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:19:46.557 Fuzzing completed. Shutting down the fuzz application 00:19:46.558 00:19:46.558 Dumping successful admin opcodes: 00:19:46.558 9, 10, 00:19:46.558 Dumping successful io opcodes: 00:19:46.558 0, 00:19:46.558 NS: 0x20000081ef00 I/O qp, Total commands completed: 1139509, total successful commands: 4489, random_seed: 3771035008 00:19:46.558 NS: 0x20000081ef00 admin qp, Total commands completed: 281168, total successful commands: 65, random_seed: 2429898112 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 974006 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 974006 ']' 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 974006 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 974006 00:19:46.558 06:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 974006' 00:19:46.558 killing process with pid 974006 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 974006 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 974006 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:19:46.558 00:19:46.558 real 0m32.188s 00:19:46.558 user 0m34.369s 00:19:46.558 sys 0m26.672s 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:46.558 ************************************ 00:19:46.558 END TEST nvmf_vfio_user_fuzz 00:19:46.558 ************************************ 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:46.558 ************************************ 00:19:46.558 START TEST nvmf_auth_target 00:19:46.558 ************************************ 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:46.558 * Looking for test storage... 00:19:46.558 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:19:46.558 06:25:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:46.558 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:46.558 06:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.558 --rc genhtml_branch_coverage=1 00:19:46.558 --rc genhtml_function_coverage=1 00:19:46.558 --rc genhtml_legend=1 00:19:46.558 --rc geninfo_all_blocks=1 00:19:46.558 --rc geninfo_unexecuted_blocks=1 00:19:46.558 00:19:46.558 ' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.558 --rc genhtml_branch_coverage=1 00:19:46.558 --rc genhtml_function_coverage=1 00:19:46.558 --rc genhtml_legend=1 00:19:46.558 --rc geninfo_all_blocks=1 00:19:46.558 --rc geninfo_unexecuted_blocks=1 00:19:46.558 00:19:46.558 ' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.558 --rc genhtml_branch_coverage=1 00:19:46.558 --rc genhtml_function_coverage=1 00:19:46.558 --rc genhtml_legend=1 00:19:46.558 --rc geninfo_all_blocks=1 00:19:46.558 --rc geninfo_unexecuted_blocks=1 00:19:46.558 00:19:46.558 ' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:46.558 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:46.558 --rc genhtml_branch_coverage=1 00:19:46.558 --rc genhtml_function_coverage=1 00:19:46.558 --rc genhtml_legend=1 00:19:46.558 
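The `lt 1.15 2` / `cmp_versions` trace above walks a component-wise numeric comparison: both version strings are split on `.`/`-` into arrays (`ver1`, `ver2`) and compared element by element, with missing components treated as the loop simply running to the longer length. A minimal standalone sketch of that logic (function name and exact padding behavior here are illustrative, not SPDK's code verbatim):

```shell
# ver_lt A B -> exit 0 if version A < B, else exit 1.
# Components are compared numerically, left to right; a missing
# component counts as 0 (so 1.15 vs 2 compares 1<2 and stops).
ver_lt() {
  local IFS=. i
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

This matches the decision visible in the trace: lcov 1.15 is judged older than 2, so the legacy `--rc lcov_*` option spelling is selected.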
--rc geninfo_all_blocks=1 00:19:46.558 --rc geninfo_unexecuted_blocks=1 00:19:46.558 00:19:46.558 ' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.558 
06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:46.558 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:46.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:46.559 06:25:37 
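Note the non-fatal error logged above: `nvmf/common.sh: line 33: [: : integer expression expected` comes from evaluating `'[' '' -eq 1 ']'` when the tested variable is empty — `-eq` requires an integer operand. A defensive pattern (illustrative, not the actual nvmf/common.sh code) defaults the value before the numeric test:

```shell
# An empty value makes a numeric test fail with
# "integer expression expected":
#   [ "$val" -eq 1 ]        # error when val=""
# Defaulting with ${val:-0} keeps the test well-formed:
val=""
if [ "${val:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With `val` empty the guarded test evaluates `0 -eq 1` and prints `disabled` instead of erroring, which is why the run above continues despite the message.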
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:19:46.559 06:25:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:19:46.559 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:19:51.833 06:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:19:51.833 06:25:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:19:51.833 Found 0000:af:00.0 (0x8086 - 0x159b) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:19:51.833 Found 0000:af:00.1 (0x8086 - 0x159b) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:51.833 
06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:19:51.833 Found net devices under 0000:af:00.0: cvl_0_0 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:19:51.833 
06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:19:51.833 Found net devices under 0000:af:00.1: cvl_0_1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:19:51.833 06:25:42 
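The device-discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by the `##*/` strip) resolves each whitelisted PCI NIC to its kernel interface names by globbing sysfs. A self-contained sketch of that lookup — the PCI address below is a placeholder, not one from this run's inventory:

```shell
# For a given PCI address, list the network interfaces the kernel
# created for it: /sys/bus/pci/devices/<pci>/net/ contains one
# directory per netdev. nullglob makes a missing device yield an
# empty list rather than a literal glob string.
pci=0000:00:00.0
shopt -s nullglob
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep basenames only
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

In the trace this is what produces the `Found net devices under 0000:af:00.0: cvl_0_0` lines, after which the first interface is moved into the `cvl_0_0_ns_spdk` namespace as the target side and the second stays in the root namespace as the initiator.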
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:51.833 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:19:51.834 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:51.834 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.439 ms 00:19:51.834 00:19:51.834 --- 10.0.0.2 ping statistics --- 00:19:51.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.834 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:51.834 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:51.834 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:19:51.834 00:19:51.834 --- 10.0.0.1 ping statistics --- 00:19:51.834 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:51.834 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=982318 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 982318 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982318 ']' 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:51.834 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=982347 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=87d57bc9ad4b6ec8c40803f3c50e275876d0043517052990 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.QmK 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 87d57bc9ad4b6ec8c40803f3c50e275876d0043517052990 0 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 87d57bc9ad4b6ec8c40803f3c50e275876d0043517052990 0 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=87d57bc9ad4b6ec8c40803f3c50e275876d0043517052990 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.QmK 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.QmK 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.QmK 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=d5ab9ee1a3078fbc38ba782327c154c0c6ddddf2c2692e2851f8e0bc944cb10a 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.WAj 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key d5ab9ee1a3078fbc38ba782327c154c0c6ddddf2c2692e2851f8e0bc944cb10a 3 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 d5ab9ee1a3078fbc38ba782327c154c0c6ddddf2c2692e2851f8e0bc944cb10a 3 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=d5ab9ee1a3078fbc38ba782327c154c0c6ddddf2c2692e2851f8e0bc944cb10a 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.WAj 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.WAj 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.WAj 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=8e1b339956ed6fae99eb7461775b4db2 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:51.834 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.11W 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 8e1b339956ed6fae99eb7461775b4db2 1 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
8e1b339956ed6fae99eb7461775b4db2 1 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=8e1b339956ed6fae99eb7461775b4db2 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.11W 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.11W 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.11W 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cde5d9b2dd5e8c0b2d89f9d48f4a3425ead06a611959bde3 00:19:51.835 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.R7q 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cde5d9b2dd5e8c0b2d89f9d48f4a3425ead06a611959bde3 2 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cde5d9b2dd5e8c0b2d89f9d48f4a3425ead06a611959bde3 2 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cde5d9b2dd5e8c0b2d89f9d48f4a3425ead06a611959bde3 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.R7q 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.R7q 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.R7q 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:51.835 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=1e977742aef01b0475ba389e4a25c060c85d75f0414583b7 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.OhD 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 1e977742aef01b0475ba389e4a25c060c85d75f0414583b7 2 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 1e977742aef01b0475ba389e4a25c060c85d75f0414583b7 2 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=1e977742aef01b0475ba389e4a25c060c85d75f0414583b7 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.OhD 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.OhD 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.OhD 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=696ca4f96a364f23535234fdc907532c 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.G0h 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 696ca4f96a364f23535234fdc907532c 1 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 696ca4f96a364f23535234fdc907532c 1 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=696ca4f96a364f23535234fdc907532c 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.G0h 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.G0h 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.G0h 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=63760d36909e3e3881fb41c0993245458f643685e4a9028cc099b85eaef12e13 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:19:52.094 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D9p 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 63760d36909e3e3881fb41c0993245458f643685e4a9028cc099b85eaef12e13 3 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 63760d36909e3e3881fb41c0993245458f643685e4a9028cc099b85eaef12e13 3 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=63760d36909e3e3881fb41c0993245458f643685e4a9028cc099b85eaef12e13 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D9p 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D9p 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.D9p 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 982318 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982318 ']' 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:52.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.095 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 982347 /var/tmp/host.sock 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 982347 ']' 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:19:52.353 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:52.354 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:52.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:19:52.354 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:52.354 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QmK 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.QmK 00:19:52.612 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.QmK 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.WAj ]] 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WAj 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WAj 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WAj 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.11W 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.11W 00:19:52.870 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.11W 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.R7q ]] 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R7q 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R7q 00:19:53.129 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R7q 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OhD 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.OhD 00:19:53.387 06:25:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.OhD 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.G0h ]] 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G0h 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G0h 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G0h 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.D9p 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.D9p 00:19:53.646 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.D9p 00:19:53.906 06:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:19:53.906 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:53.906 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:53.906 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.906 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:53.906 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.164 06:25:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.164 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:54.423 00:19:54.423 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.423 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:54.423 06:25:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.681 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.681 { 00:19:54.681 "cntlid": 1, 00:19:54.681 "qid": 0, 00:19:54.681 "state": "enabled", 00:19:54.681 "thread": "nvmf_tgt_poll_group_000", 00:19:54.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:54.682 "listen_address": { 00:19:54.682 "trtype": "TCP", 00:19:54.682 "adrfam": "IPv4", 00:19:54.682 "traddr": "10.0.0.2", 00:19:54.682 "trsvcid": "4420" 00:19:54.682 }, 00:19:54.682 "peer_address": { 00:19:54.682 "trtype": "TCP", 00:19:54.682 "adrfam": "IPv4", 00:19:54.682 "traddr": "10.0.0.1", 00:19:54.682 "trsvcid": "50336" 00:19:54.682 }, 00:19:54.682 "auth": { 00:19:54.682 "state": "completed", 00:19:54.682 "digest": "sha256", 00:19:54.682 "dhgroup": "null" 00:19:54.682 } 00:19:54.682 } 00:19:54.682 ]' 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.682 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.940 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:19:54.940 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:19:55.507 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:19:55.507 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.766 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:56.034 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.034 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.383 { 00:19:56.383 "cntlid": 3, 00:19:56.383 "qid": 0, 00:19:56.383 "state": "enabled", 00:19:56.383 "thread": "nvmf_tgt_poll_group_000", 00:19:56.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:56.383 "listen_address": { 00:19:56.383 "trtype": "TCP", 00:19:56.383 "adrfam": "IPv4", 00:19:56.383 
"traddr": "10.0.0.2", 00:19:56.383 "trsvcid": "4420" 00:19:56.383 }, 00:19:56.383 "peer_address": { 00:19:56.383 "trtype": "TCP", 00:19:56.383 "adrfam": "IPv4", 00:19:56.383 "traddr": "10.0.0.1", 00:19:56.383 "trsvcid": "40146" 00:19:56.383 }, 00:19:56.383 "auth": { 00:19:56.383 "state": "completed", 00:19:56.383 "digest": "sha256", 00:19:56.383 "dhgroup": "null" 00:19:56.383 } 00:19:56.383 } 00:19:56.383 ]' 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.383 06:25:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.383 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:19:56.383 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:56.990 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.248 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:57.505 00:19:57.505 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:57.505 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:57.505 
06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:57.763 { 00:19:57.763 "cntlid": 5, 00:19:57.763 "qid": 0, 00:19:57.763 "state": "enabled", 00:19:57.763 "thread": "nvmf_tgt_poll_group_000", 00:19:57.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:57.763 "listen_address": { 00:19:57.763 "trtype": "TCP", 00:19:57.763 "adrfam": "IPv4", 00:19:57.763 "traddr": "10.0.0.2", 00:19:57.763 "trsvcid": "4420" 00:19:57.763 }, 00:19:57.763 "peer_address": { 00:19:57.763 "trtype": "TCP", 00:19:57.763 "adrfam": "IPv4", 00:19:57.763 "traddr": "10.0.0.1", 00:19:57.763 "trsvcid": "40162" 00:19:57.763 }, 00:19:57.763 "auth": { 00:19:57.763 "state": "completed", 00:19:57.763 "digest": "sha256", 00:19:57.763 "dhgroup": "null" 00:19:57.763 } 00:19:57.763 } 00:19:57.763 ]' 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:57.763 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.764 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.022 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:19:58.022 06:25:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.589 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:58.848 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:59.107 00:19:59.107 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:59.107 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:59.107 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.366 
06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:59.366 { 00:19:59.366 "cntlid": 7, 00:19:59.366 "qid": 0, 00:19:59.366 "state": "enabled", 00:19:59.366 "thread": "nvmf_tgt_poll_group_000", 00:19:59.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:19:59.366 "listen_address": { 00:19:59.366 "trtype": "TCP", 00:19:59.366 "adrfam": "IPv4", 00:19:59.366 "traddr": "10.0.0.2", 00:19:59.366 "trsvcid": "4420" 00:19:59.366 }, 00:19:59.366 "peer_address": { 00:19:59.366 "trtype": "TCP", 00:19:59.366 "adrfam": "IPv4", 00:19:59.366 "traddr": "10.0.0.1", 00:19:59.366 "trsvcid": "40188" 00:19:59.366 }, 00:19:59.366 "auth": { 00:19:59.366 "state": "completed", 00:19:59.366 "digest": "sha256", 00:19:59.366 "dhgroup": "null" 00:19:59.366 } 00:19:59.366 } 00:19:59.366 ]' 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.366 06:25:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.624 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:19:59.625 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:00.192 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.451 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.451 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.451 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.451 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.451 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:00.710 00:20:00.710 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.710 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.710 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.969 { 00:20:00.969 "cntlid": 9, 00:20:00.969 "qid": 0, 00:20:00.969 "state": "enabled", 00:20:00.969 "thread": "nvmf_tgt_poll_group_000", 00:20:00.969 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:00.969 "listen_address": { 00:20:00.969 "trtype": "TCP", 00:20:00.969 "adrfam": "IPv4", 00:20:00.969 "traddr": "10.0.0.2", 00:20:00.969 "trsvcid": "4420" 00:20:00.969 }, 00:20:00.969 "peer_address": { 00:20:00.969 "trtype": "TCP", 00:20:00.969 "adrfam": "IPv4", 00:20:00.969 "traddr": "10.0.0.1", 00:20:00.969 "trsvcid": "40226" 00:20:00.969 
}, 00:20:00.969 "auth": { 00:20:00.969 "state": "completed", 00:20:00.969 "digest": "sha256", 00:20:00.969 "dhgroup": "ffdhe2048" 00:20:00.969 } 00:20:00.969 } 00:20:00.969 ]' 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.969 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.228 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:01.228 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.795 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:01.795 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.054 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:02.317 00:20:02.317 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.317 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.317 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.576 { 00:20:02.576 "cntlid": 11, 00:20:02.576 "qid": 0, 00:20:02.576 "state": "enabled", 00:20:02.576 "thread": "nvmf_tgt_poll_group_000", 00:20:02.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:02.576 "listen_address": { 00:20:02.576 "trtype": "TCP", 00:20:02.576 "adrfam": "IPv4", 00:20:02.576 "traddr": "10.0.0.2", 00:20:02.576 "trsvcid": "4420" 00:20:02.576 }, 00:20:02.576 "peer_address": { 00:20:02.576 "trtype": "TCP", 00:20:02.576 "adrfam": "IPv4", 00:20:02.576 "traddr": "10.0.0.1", 00:20:02.576 "trsvcid": "40248" 00:20:02.576 }, 00:20:02.576 "auth": { 00:20:02.576 "state": "completed", 00:20:02.576 "digest": "sha256", 00:20:02.576 "dhgroup": "ffdhe2048" 00:20:02.576 } 00:20:02.576 } 00:20:02.576 ]' 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.576 06:25:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.576 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.835 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:02.835 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.402 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.403 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.660 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.918 00:20:03.918 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.918 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.918 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.176 06:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.176 { 00:20:04.176 "cntlid": 13, 00:20:04.176 "qid": 0, 00:20:04.176 "state": "enabled", 00:20:04.176 "thread": "nvmf_tgt_poll_group_000", 00:20:04.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:04.176 "listen_address": { 00:20:04.176 "trtype": "TCP", 00:20:04.176 "adrfam": "IPv4", 00:20:04.176 "traddr": "10.0.0.2", 00:20:04.176 "trsvcid": "4420" 00:20:04.176 }, 00:20:04.176 "peer_address": { 00:20:04.176 "trtype": "TCP", 00:20:04.176 "adrfam": "IPv4", 00:20:04.176 "traddr": "10.0.0.1", 00:20:04.176 "trsvcid": "40274" 00:20:04.176 }, 00:20:04.176 "auth": { 00:20:04.176 "state": "completed", 00:20:04.176 "digest": "sha256", 00:20:04.176 "dhgroup": "ffdhe2048" 00:20:04.176 } 00:20:04.176 } 00:20:04.176 ]' 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.176 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.435 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:04.435 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.002 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.261 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:05.520 00:20:05.520 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.520 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.520 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.520 { 00:20:05.520 "cntlid": 15, 00:20:05.520 "qid": 0, 00:20:05.520 "state": "enabled", 00:20:05.520 "thread": "nvmf_tgt_poll_group_000", 00:20:05.520 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:05.520 "listen_address": { 00:20:05.520 "trtype": "TCP", 00:20:05.520 "adrfam": "IPv4", 00:20:05.520 "traddr": "10.0.0.2", 00:20:05.520 "trsvcid": "4420" 00:20:05.520 }, 00:20:05.520 "peer_address": { 00:20:05.520 "trtype": "TCP", 00:20:05.520 "adrfam": "IPv4", 00:20:05.520 "traddr": "10.0.0.1", 
00:20:05.520 "trsvcid": "39610" 00:20:05.520 }, 00:20:05.520 "auth": { 00:20:05.520 "state": "completed", 00:20:05.520 "digest": "sha256", 00:20:05.520 "dhgroup": "ffdhe2048" 00:20:05.520 } 00:20:05.520 } 00:20:05.520 ]' 00:20:05.520 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.779 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.037 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:06.038 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:06.605 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:06.606 06:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.606 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:06.864 00:20:06.864 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:06.864 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:06.864 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.123 { 00:20:07.123 "cntlid": 17, 00:20:07.123 "qid": 0, 00:20:07.123 "state": "enabled", 00:20:07.123 "thread": "nvmf_tgt_poll_group_000", 00:20:07.123 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:07.123 "listen_address": { 00:20:07.123 "trtype": "TCP", 00:20:07.123 "adrfam": "IPv4", 00:20:07.123 "traddr": "10.0.0.2", 00:20:07.123 "trsvcid": "4420" 00:20:07.123 }, 00:20:07.123 "peer_address": { 00:20:07.123 "trtype": "TCP", 00:20:07.123 "adrfam": "IPv4", 00:20:07.123 "traddr": "10.0.0.1", 00:20:07.123 "trsvcid": "39638" 00:20:07.123 }, 00:20:07.123 "auth": { 00:20:07.123 "state": "completed", 00:20:07.123 "digest": "sha256", 00:20:07.123 "dhgroup": "ffdhe3072" 00:20:07.123 } 00:20:07.123 } 00:20:07.123 ]' 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:07.123 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.382 06:25:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.382 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.382 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.382 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.382 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.382 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:07.382 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.951 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:07.951 06:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:07.951 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.210 06:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.210 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.469 00:20:08.469 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:08.469 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:08.469 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:08.728 { 00:20:08.728 "cntlid": 19, 00:20:08.728 "qid": 0, 00:20:08.728 "state": "enabled", 00:20:08.728 "thread": "nvmf_tgt_poll_group_000", 00:20:08.728 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:08.728 "listen_address": { 00:20:08.728 "trtype": "TCP", 00:20:08.728 "adrfam": "IPv4", 00:20:08.728 "traddr": "10.0.0.2", 00:20:08.728 "trsvcid": "4420" 00:20:08.728 }, 00:20:08.728 "peer_address": { 00:20:08.728 "trtype": "TCP", 00:20:08.728 "adrfam": "IPv4", 00:20:08.728 "traddr": "10.0.0.1", 00:20:08.728 "trsvcid": "39656" 00:20:08.728 }, 00:20:08.728 "auth": { 00:20:08.728 "state": "completed", 00:20:08.728 "digest": "sha256", 00:20:08.728 "dhgroup": "ffdhe3072" 00:20:08.728 } 00:20:08.728 } 00:20:08.728 ]' 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:08.728 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:08.987 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:08.987 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:08.987 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:08.987 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:08.987 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:09.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:09.555 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:09.813 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:10.071
00:20:10.072 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:10.072 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:10.072 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:10.330 {
00:20:10.330 "cntlid": 21,
00:20:10.330 "qid": 0,
00:20:10.330 "state": "enabled",
00:20:10.330 "thread": "nvmf_tgt_poll_group_000",
00:20:10.330 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:10.330 "listen_address": {
00:20:10.330 "trtype": "TCP",
00:20:10.330 "adrfam": "IPv4",
00:20:10.330 "traddr": "10.0.0.2",
00:20:10.330 "trsvcid": "4420"
00:20:10.330 },
00:20:10.330 "peer_address": {
00:20:10.330 "trtype": "TCP",
00:20:10.330 "adrfam": "IPv4",
00:20:10.330 "traddr": "10.0.0.1",
00:20:10.330 "trsvcid": "39682"
00:20:10.330 },
00:20:10.330 "auth": {
00:20:10.330 "state": "completed",
00:20:10.330 "digest": "sha256",
00:20:10.330 "dhgroup": "ffdhe3072"
00:20:10.330 }
00:20:10.330 }
00:20:10.330 ]'
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:10.330 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:10.589 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:10.589 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:10.589 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:10.589 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:10.589 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:11.157 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:11.157 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:11.415 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:11.674
00:20:11.674 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:11.674 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:11.674 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:11.932 {
00:20:11.932 "cntlid": 23,
00:20:11.932 "qid": 0,
00:20:11.932 "state": "enabled",
00:20:11.932 "thread": "nvmf_tgt_poll_group_000",
00:20:11.932 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:11.932 "listen_address": {
00:20:11.932 "trtype": "TCP",
00:20:11.932 "adrfam": "IPv4",
00:20:11.932 "traddr": "10.0.0.2",
00:20:11.932 "trsvcid": "4420"
00:20:11.932 },
00:20:11.932 "peer_address": {
00:20:11.932 "trtype": "TCP",
00:20:11.932 "adrfam": "IPv4",
00:20:11.932 "traddr": "10.0.0.1",
00:20:11.932 "trsvcid": "39712"
00:20:11.932 },
00:20:11.932 "auth": {
00:20:11.932 "state": "completed",
00:20:11.932 "digest": "sha256",
00:20:11.932 "dhgroup": "ffdhe3072"
00:20:11.932 }
00:20:11.932 }
00:20:11.932 ]'
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:11.932 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:12.191 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:20:12.191 06:26:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:12.759 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
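Each connect_authenticate iteration in this log ends with the same three checks (target/auth.sh@75-77): jq extracts `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` from the `nvmf_subsystem_get_qpairs` output, and the script compares them against the expected values. A minimal Python sketch of that validation, run against a qpair record abridged from the capture above (the helper name `check_auth` is illustrative, not part of the SPDK test suite):

```python
import json

# One qpair as reported by nvmf_subsystem_get_qpairs, abridged from the log above.
QPAIRS_JSON = """
[
  {
    "cntlid": 21,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha256",
      "dhgroup": "ffdhe3072"
    }
  }
]
"""

def check_auth(qpairs, digest, dhgroup):
    # Mirrors auth.sh@75-77: DH-HMAC-CHAP must have completed on the first
    # qpair, with the expected hash digest and FFDHE group.
    auth = qpairs[0]["auth"]
    return (auth["state"] == "completed"
            and auth["digest"] == digest
            and auth["dhgroup"] == dhgroup)

qpairs = json.loads(QPAIRS_JSON)
print(check_auth(qpairs, "sha256", "ffdhe3072"))  # True for the capture above
```

In the shell script itself the same comparisons are the `[[ sha256 == \s\h\a\2\5\6 ]]`-style tests visible throughout this log.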
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:12.759 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.018 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:13.277
00:20:13.277 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:13.277 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:13.277 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:13.535 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:13.536 {
00:20:13.536 "cntlid": 25,
00:20:13.536 "qid": 0,
00:20:13.536 "state": "enabled",
00:20:13.536 "thread": "nvmf_tgt_poll_group_000",
00:20:13.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:13.536 "listen_address": {
00:20:13.536 "trtype": "TCP",
00:20:13.536 "adrfam": "IPv4",
00:20:13.536 "traddr": "10.0.0.2",
00:20:13.536 "trsvcid": "4420"
00:20:13.536 },
00:20:13.536 "peer_address": {
00:20:13.536 "trtype": "TCP",
00:20:13.536 "adrfam": "IPv4",
00:20:13.536 "traddr": "10.0.0.1",
00:20:13.536 "trsvcid": "39750"
00:20:13.536 },
00:20:13.536 "auth": {
00:20:13.536 "state": "completed",
00:20:13.536 "digest": "sha256",
00:20:13.536 "dhgroup": "ffdhe4096"
00:20:13.536 }
00:20:13.536 }
00:20:13.536 ]'
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:13.536 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:13.794 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:13.794 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:14.361 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:14.361 06:26:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.620 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:14.879
00:20:14.879 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:14.879 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:14.879 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:15.138 {
00:20:15.138 "cntlid": 27,
00:20:15.138 "qid": 0,
00:20:15.138 "state": "enabled",
00:20:15.138 "thread": "nvmf_tgt_poll_group_000",
00:20:15.138 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:15.138 "listen_address": {
00:20:15.138 "trtype": "TCP",
00:20:15.138 "adrfam": "IPv4",
00:20:15.138 "traddr": "10.0.0.2",
00:20:15.138 "trsvcid": "4420"
00:20:15.138 },
00:20:15.138 "peer_address": {
00:20:15.138 "trtype": "TCP",
00:20:15.138 "adrfam": "IPv4",
00:20:15.138 "traddr": "10.0.0.1",
00:20:15.138 "trsvcid": "39772"
00:20:15.138 },
00:20:15.138 "auth": {
00:20:15.138 "state": "completed",
00:20:15.138 "digest": "sha256",
00:20:15.138 "dhgroup": "ffdhe4096"
00:20:15.138 }
00:20:15.138 }
00:20:15.138 ]'
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:15.138 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:15.398 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:15.398 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:15.966 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:15.966 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.225 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:16.483
00:20:16.483 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:16.483 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:16.484 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:16.742 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:16.743 {
00:20:16.743 "cntlid": 29,
00:20:16.743 "qid": 0,
00:20:16.743 "state": "enabled",
00:20:16.743 "thread": "nvmf_tgt_poll_group_000",
00:20:16.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:16.743 "listen_address": {
00:20:16.743 "trtype": "TCP",
00:20:16.743 "adrfam": "IPv4",
00:20:16.743 "traddr": "10.0.0.2",
00:20:16.743 "trsvcid": "4420"
00:20:16.743 },
00:20:16.743 "peer_address": {
00:20:16.743 "trtype": "TCP",
00:20:16.743 "adrfam": "IPv4",
00:20:16.743 "traddr": "10.0.0.1",
00:20:16.743 "trsvcid": "41480"
00:20:16.743 },
00:20:16.743 "auth": {
00:20:16.743 "state": "completed",
00:20:16.743 "digest": "sha256",
00:20:16.743 "dhgroup": "ffdhe4096"
00:20:16.743 }
00:20:16.743 }
00:20:16.743 ]'
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:16.743 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:17.001 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:17.002 06:26:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:17.569 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:17.569 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:17.828 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:18.087
00:20:18.087 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:18.087 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:18.088 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:18.088 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:18.088 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:18.088 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:18.088 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
common/autotest_common.sh@10 -- # set +x 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.347 { 00:20:18.347 "cntlid": 31, 00:20:18.347 "qid": 0, 00:20:18.347 "state": "enabled", 00:20:18.347 "thread": "nvmf_tgt_poll_group_000", 00:20:18.347 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.347 "listen_address": { 00:20:18.347 "trtype": "TCP", 00:20:18.347 "adrfam": "IPv4", 00:20:18.347 "traddr": "10.0.0.2", 00:20:18.347 "trsvcid": "4420" 00:20:18.347 }, 00:20:18.347 "peer_address": { 00:20:18.347 "trtype": "TCP", 00:20:18.347 "adrfam": "IPv4", 00:20:18.347 "traddr": "10.0.0.1", 00:20:18.347 "trsvcid": "41508" 00:20:18.347 }, 00:20:18.347 "auth": { 00:20:18.347 "state": "completed", 00:20:18.347 "digest": "sha256", 00:20:18.347 "dhgroup": "ffdhe4096" 00:20:18.347 } 00:20:18.347 } 00:20:18.347 ]' 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.347 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.606 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:18.606 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.173 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.174 06:26:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.433 06:26:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.691 00:20:19.691 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.691 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.691 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.949 { 00:20:19.949 "cntlid": 33, 00:20:19.949 "qid": 0, 00:20:19.949 "state": "enabled", 00:20:19.949 "thread": "nvmf_tgt_poll_group_000", 00:20:19.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:19.949 "listen_address": { 00:20:19.949 "trtype": "TCP", 00:20:19.949 "adrfam": "IPv4", 00:20:19.949 "traddr": "10.0.0.2", 00:20:19.949 
"trsvcid": "4420" 00:20:19.949 }, 00:20:19.949 "peer_address": { 00:20:19.949 "trtype": "TCP", 00:20:19.949 "adrfam": "IPv4", 00:20:19.949 "traddr": "10.0.0.1", 00:20:19.949 "trsvcid": "41530" 00:20:19.949 }, 00:20:19.949 "auth": { 00:20:19.949 "state": "completed", 00:20:19.949 "digest": "sha256", 00:20:19.949 "dhgroup": "ffdhe6144" 00:20:19.949 } 00:20:19.949 } 00:20:19.949 ]' 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.949 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.950 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.208 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:20.208 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.774 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:20.774 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.033 06:26:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.033 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.292 00:20:21.292 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.292 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.292 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.550 { 00:20:21.550 "cntlid": 35, 00:20:21.550 "qid": 0, 00:20:21.550 "state": "enabled", 00:20:21.550 "thread": "nvmf_tgt_poll_group_000", 00:20:21.550 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.550 "listen_address": { 00:20:21.550 "trtype": "TCP", 00:20:21.550 "adrfam": "IPv4", 00:20:21.550 "traddr": "10.0.0.2", 00:20:21.550 "trsvcid": "4420" 00:20:21.550 }, 00:20:21.550 "peer_address": { 00:20:21.550 "trtype": "TCP", 00:20:21.550 "adrfam": "IPv4", 00:20:21.550 "traddr": "10.0.0.1", 00:20:21.550 "trsvcid": "41556" 00:20:21.550 }, 00:20:21.550 "auth": { 00:20:21.550 "state": "completed", 00:20:21.550 "digest": "sha256", 00:20:21.550 "dhgroup": "ffdhe6144" 00:20:21.550 } 00:20:21.550 } 00:20:21.550 ]' 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.550 06:26:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.550 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.808 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:21.808 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.375 06:26:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.634 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:22.893 00:20:22.893 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.893 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.893 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.152 06:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.152 { 00:20:23.152 "cntlid": 37, 00:20:23.152 "qid": 0, 00:20:23.152 "state": "enabled", 00:20:23.152 "thread": "nvmf_tgt_poll_group_000", 00:20:23.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.152 "listen_address": { 00:20:23.152 "trtype": "TCP", 00:20:23.152 "adrfam": "IPv4", 00:20:23.152 "traddr": "10.0.0.2", 00:20:23.152 "trsvcid": "4420" 00:20:23.152 }, 00:20:23.152 "peer_address": { 00:20:23.152 "trtype": "TCP", 00:20:23.152 "adrfam": "IPv4", 00:20:23.152 "traddr": "10.0.0.1", 00:20:23.152 "trsvcid": "41572" 00:20:23.152 }, 00:20:23.152 "auth": { 00:20:23.152 "state": "completed", 00:20:23.152 "digest": "sha256", 00:20:23.152 "dhgroup": "ffdhe6144" 00:20:23.152 } 00:20:23.152 } 00:20:23.152 ]' 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.152 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.410 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.410 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.410 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.410 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.410 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.674 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:23.674 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:24.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.240 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:24.807 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.807 { 00:20:24.807 "cntlid": 39, 00:20:24.807 "qid": 0, 00:20:24.807 "state": "enabled", 00:20:24.807 "thread": "nvmf_tgt_poll_group_000", 00:20:24.807 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.807 "listen_address": { 00:20:24.807 "trtype": "TCP", 00:20:24.807 "adrfam": 
"IPv4", 00:20:24.807 "traddr": "10.0.0.2", 00:20:24.807 "trsvcid": "4420" 00:20:24.807 }, 00:20:24.807 "peer_address": { 00:20:24.807 "trtype": "TCP", 00:20:24.807 "adrfam": "IPv4", 00:20:24.807 "traddr": "10.0.0.1", 00:20:24.807 "trsvcid": "41592" 00:20:24.807 }, 00:20:24.807 "auth": { 00:20:24.807 "state": "completed", 00:20:24.807 "digest": "sha256", 00:20:24.807 "dhgroup": "ffdhe6144" 00:20:24.807 } 00:20:24.807 } 00:20:24.807 ]' 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.807 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:25.066 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:25.066 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:25.066 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.066 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.066 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.324 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:25.325 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.892 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.892 
06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:25.892 06:26:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:26.459 00:20:26.459 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.459 06:26:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.460 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.719 { 00:20:26.719 "cntlid": 41, 00:20:26.719 "qid": 0, 00:20:26.719 "state": "enabled", 00:20:26.719 "thread": "nvmf_tgt_poll_group_000", 00:20:26.719 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.719 "listen_address": { 00:20:26.719 "trtype": "TCP", 00:20:26.719 "adrfam": "IPv4", 00:20:26.719 "traddr": "10.0.0.2", 00:20:26.719 "trsvcid": "4420" 00:20:26.719 }, 00:20:26.719 "peer_address": { 00:20:26.719 "trtype": "TCP", 00:20:26.719 "adrfam": "IPv4", 00:20:26.719 "traddr": "10.0.0.1", 00:20:26.719 "trsvcid": "46888" 00:20:26.719 }, 00:20:26.719 "auth": { 00:20:26.719 "state": "completed", 00:20:26.719 "digest": "sha256", 00:20:26.719 "dhgroup": "ffdhe8192" 00:20:26.719 } 00:20:26.719 } 00:20:26.719 ]' 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.719 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.977 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:26.977 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.545 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:27.804 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:28.371 00:20:28.371 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.371 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:28.371 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.629 06:26:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:28.629 { 00:20:28.629 "cntlid": 43, 00:20:28.629 "qid": 0, 00:20:28.629 "state": "enabled", 00:20:28.629 "thread": "nvmf_tgt_poll_group_000", 00:20:28.629 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:28.629 "listen_address": { 00:20:28.629 "trtype": "TCP", 00:20:28.629 "adrfam": "IPv4", 00:20:28.629 "traddr": "10.0.0.2", 00:20:28.629 "trsvcid": "4420" 00:20:28.629 }, 00:20:28.629 "peer_address": { 00:20:28.629 "trtype": "TCP", 00:20:28.629 "adrfam": "IPv4", 00:20:28.629 "traddr": "10.0.0.1", 00:20:28.629 "trsvcid": "46912" 00:20:28.629 }, 00:20:28.629 "auth": { 00:20:28.629 "state": "completed", 00:20:28.629 "digest": "sha256", 00:20:28.629 "dhgroup": "ffdhe8192" 00:20:28.629 } 00:20:28.629 } 00:20:28.629 ]' 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:28.629 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.630 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.630 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.887 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:28.888 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.454 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.454 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:29.713 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:30.281 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:30.281 { 00:20:30.281 "cntlid": 45, 00:20:30.281 "qid": 0, 00:20:30.281 "state": "enabled", 00:20:30.281 "thread": "nvmf_tgt_poll_group_000", 00:20:30.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:30.281 
"listen_address": { 00:20:30.281 "trtype": "TCP", 00:20:30.281 "adrfam": "IPv4", 00:20:30.281 "traddr": "10.0.0.2", 00:20:30.281 "trsvcid": "4420" 00:20:30.281 }, 00:20:30.281 "peer_address": { 00:20:30.281 "trtype": "TCP", 00:20:30.281 "adrfam": "IPv4", 00:20:30.281 "traddr": "10.0.0.1", 00:20:30.281 "trsvcid": "46944" 00:20:30.281 }, 00:20:30.281 "auth": { 00:20:30.281 "state": "completed", 00:20:30.281 "digest": "sha256", 00:20:30.281 "dhgroup": "ffdhe8192" 00:20:30.281 } 00:20:30.281 } 00:20:30.281 ]' 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:30.281 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:30.540 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:30.540 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:30.540 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.540 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.540 06:26:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:30.798 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:30.798 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.366 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:31.367 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.367 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.934 00:20:31.934 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.934 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:31.934 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.193 { 00:20:32.193 "cntlid": 47, 00:20:32.193 "qid": 0, 00:20:32.193 "state": "enabled", 00:20:32.193 "thread": "nvmf_tgt_poll_group_000", 00:20:32.193 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.193 "listen_address": { 00:20:32.193 "trtype": "TCP", 00:20:32.193 "adrfam": "IPv4", 00:20:32.193 "traddr": "10.0.0.2", 00:20:32.193 "trsvcid": "4420" 00:20:32.193 }, 00:20:32.193 "peer_address": { 00:20:32.193 "trtype": "TCP", 00:20:32.193 "adrfam": "IPv4", 00:20:32.193 "traddr": "10.0.0.1", 00:20:32.193 "trsvcid": "46972" 00:20:32.193 }, 00:20:32.193 "auth": { 00:20:32.193 "state": "completed", 00:20:32.193 "digest": "sha256", 00:20:32.193 "dhgroup": "ffdhe8192" 00:20:32.193 } 00:20:32.193 } 00:20:32.193 ]' 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.193 06:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.193 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.452 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:32.452 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.020 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:33.020 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:33.279 06:26:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:33.537
00:20:33.537 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:33.537 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:33.537 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:33.798 {
00:20:33.798 "cntlid": 49,
00:20:33.798 "qid": 0,
00:20:33.798 "state": "enabled",
00:20:33.798 "thread": "nvmf_tgt_poll_group_000",
00:20:33.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:33.798 "listen_address": {
00:20:33.798 "trtype": "TCP",
00:20:33.798 "adrfam": "IPv4",
00:20:33.798 "traddr": "10.0.0.2",
00:20:33.798 "trsvcid": "4420"
00:20:33.798 },
00:20:33.798 "peer_address": {
00:20:33.798 "trtype": "TCP",
00:20:33.798 "adrfam": "IPv4",
00:20:33.798 "traddr": "10.0.0.1",
00:20:33.798 "trsvcid": "47008"
00:20:33.798 },
00:20:33.798 "auth": {
00:20:33.798 "state": "completed",
00:20:33.798 "digest": "sha384",
00:20:33.798 "dhgroup": "null"
00:20:33.798 }
00:20:33.798 }
00:20:33.798 ]'
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:33.798 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:34.144 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:34.144 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:34.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:34.769 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:35.027
00:20:35.027 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:35.027 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:35.027 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:35.285 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:35.286 {
00:20:35.286 "cntlid": 51,
00:20:35.286 "qid": 0,
00:20:35.286 "state": "enabled",
00:20:35.286 "thread": "nvmf_tgt_poll_group_000",
00:20:35.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:35.286 "listen_address": {
00:20:35.286 "trtype": "TCP",
00:20:35.286 "adrfam": "IPv4",
00:20:35.286 "traddr": "10.0.0.2",
00:20:35.286 "trsvcid": "4420"
00:20:35.286 },
00:20:35.286 "peer_address": {
00:20:35.286 "trtype": "TCP",
00:20:35.286 "adrfam": "IPv4",
00:20:35.286 "traddr": "10.0.0.1",
00:20:35.286 "trsvcid": "35794"
00:20:35.286 },
00:20:35.286 "auth": {
00:20:35.286 "state": "completed",
00:20:35.286 "digest": "sha384",
00:20:35.286 "dhgroup": "null"
00:20:35.286 }
00:20:35.286 }
00:20:35.286 ]'
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:35.286 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:35.544 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:35.544 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:36.111 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:36.111 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:36.370 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:20:36.628
00:20:36.628 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:36.628 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:36.628 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:36.887 {
00:20:36.887 "cntlid": 53,
00:20:36.887 "qid": 0,
00:20:36.887 "state": "enabled",
00:20:36.887 "thread": "nvmf_tgt_poll_group_000",
00:20:36.887 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:36.887 "listen_address": {
00:20:36.887 "trtype": "TCP",
00:20:36.887 "adrfam": "IPv4",
00:20:36.887 "traddr": "10.0.0.2",
00:20:36.887 "trsvcid": "4420"
00:20:36.887 },
00:20:36.887 "peer_address": {
00:20:36.887 "trtype": "TCP",
00:20:36.887 "adrfam": "IPv4",
00:20:36.887 "traddr": "10.0.0.1",
00:20:36.887 "trsvcid": "35822"
00:20:36.887 },
00:20:36.887 "auth": {
00:20:36.887 "state": "completed",
00:20:36.887 "digest": "sha384",
00:20:36.887 "dhgroup": "null"
00:20:36.887 }
00:20:36.887 }
00:20:36.887 ]'
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:36.887 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:37.146 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:37.146 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:20:37.712 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:37.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:37.712 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:37.712 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.713 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.713 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.713 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:37.713 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:37.713 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:37.971 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:20:38.230
00:20:38.230 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:38.230 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:38.230 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:38.488 {
00:20:38.488 "cntlid": 55,
00:20:38.488 "qid": 0,
00:20:38.488 "state": "enabled",
00:20:38.488 "thread": "nvmf_tgt_poll_group_000",
00:20:38.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:38.488 "listen_address": {
00:20:38.488 "trtype": "TCP",
00:20:38.488 "adrfam": "IPv4",
00:20:38.488 "traddr": "10.0.0.2",
00:20:38.488 "trsvcid": "4420"
00:20:38.488 },
00:20:38.488 "peer_address": {
00:20:38.488 "trtype": "TCP",
00:20:38.488 "adrfam": "IPv4",
00:20:38.488 "traddr": "10.0.0.1",
00:20:38.488 "trsvcid": "35860"
00:20:38.488 },
00:20:38.488 "auth": {
00:20:38.488 "state": "completed",
00:20:38.488 "digest": "sha384",
00:20:38.488 "dhgroup": "null"
00:20:38.488 }
00:20:38.488 }
00:20:38.488 ]'
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:38.488 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:38.488 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:20:38.488 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:38.488 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:38.488 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:38.488 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:38.746 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:20:38.746 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:39.313 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:39.313 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.572 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:20:39.831
00:20:39.831 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:20:39.831 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:20:39.831 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:20:40.089 {
00:20:40.089 "cntlid": 57,
00:20:40.089 "qid": 0,
00:20:40.089 "state": "enabled",
00:20:40.089 "thread": "nvmf_tgt_poll_group_000",
00:20:40.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:20:40.089 "listen_address": {
00:20:40.089 "trtype": "TCP",
00:20:40.089 "adrfam": "IPv4",
00:20:40.089 "traddr": "10.0.0.2",
00:20:40.089 "trsvcid": "4420"
00:20:40.089 },
00:20:40.089 "peer_address": {
00:20:40.089 "trtype": "TCP",
00:20:40.089 "adrfam": "IPv4",
00:20:40.089 "traddr": "10.0.0.1",
00:20:40.089 "trsvcid": "35894"
00:20:40.089 },
00:20:40.089 "auth": {
00:20:40.089 "state": "completed",
00:20:40.089 "digest": "sha384",
00:20:40.089 "dhgroup": "ffdhe2048"
00:20:40.089 }
00:20:40.089 }
00:20:40.089 ]'
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:20:40.089 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:20:40.348 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:40.348 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:20:40.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:40.914 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:41.173 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:20:41.432
00:20:41.432 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:41.432 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:41.432 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:41.690 { 00:20:41.690 "cntlid": 59, 00:20:41.690 "qid": 0, 00:20:41.690 "state": "enabled", 00:20:41.690 "thread": "nvmf_tgt_poll_group_000", 00:20:41.690 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:41.690 "listen_address": { 00:20:41.690 "trtype": "TCP", 00:20:41.690 "adrfam": "IPv4", 00:20:41.690 "traddr": "10.0.0.2", 00:20:41.690 "trsvcid": "4420" 00:20:41.690 }, 00:20:41.690 "peer_address": { 00:20:41.690 "trtype": "TCP", 00:20:41.690 "adrfam": "IPv4", 00:20:41.690 "traddr": "10.0.0.1", 00:20:41.690 "trsvcid": "35924" 00:20:41.690 }, 00:20:41.690 "auth": { 00:20:41.690 "state": "completed", 00:20:41.690 "digest": "sha384", 00:20:41.690 "dhgroup": "ffdhe2048" 00:20:41.690 } 00:20:41.690 } 00:20:41.690 ]' 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:41.690 06:26:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:41.690 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.691 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.691 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.691 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.949 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:41.949 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:42.515 06:26:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:42.515 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.515 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:42.774 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:43.032 00:20:43.032 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:43.032 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:43.032 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:43.032 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:43.032 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:43.032 06:26:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.033 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:43.291 { 00:20:43.291 "cntlid": 61, 00:20:43.291 "qid": 0, 00:20:43.291 "state": "enabled", 00:20:43.291 "thread": "nvmf_tgt_poll_group_000", 00:20:43.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:43.291 "listen_address": { 00:20:43.291 "trtype": "TCP", 00:20:43.291 "adrfam": "IPv4", 00:20:43.291 "traddr": "10.0.0.2", 00:20:43.291 "trsvcid": "4420" 00:20:43.291 }, 00:20:43.291 "peer_address": { 00:20:43.291 "trtype": "TCP", 00:20:43.291 "adrfam": "IPv4", 00:20:43.291 "traddr": "10.0.0.1", 00:20:43.291 "trsvcid": "35942" 00:20:43.291 }, 00:20:43.291 "auth": { 00:20:43.291 "state": "completed", 00:20:43.291 "digest": "sha384", 00:20:43.291 "dhgroup": "ffdhe2048" 00:20:43.291 } 00:20:43.291 } 00:20:43.291 ]' 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:43.291 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:43.550 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:43.550 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:44.116 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.116 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.375 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:44.634 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.634 { 00:20:44.634 "cntlid": 63, 00:20:44.634 "qid": 0, 00:20:44.634 "state": "enabled", 00:20:44.634 "thread": "nvmf_tgt_poll_group_000", 00:20:44.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.634 "listen_address": { 00:20:44.634 "trtype": "TCP", 00:20:44.634 "adrfam": 
"IPv4", 00:20:44.634 "traddr": "10.0.0.2", 00:20:44.634 "trsvcid": "4420" 00:20:44.634 }, 00:20:44.634 "peer_address": { 00:20:44.634 "trtype": "TCP", 00:20:44.634 "adrfam": "IPv4", 00:20:44.634 "traddr": "10.0.0.1", 00:20:44.634 "trsvcid": "35966" 00:20:44.634 }, 00:20:44.634 "auth": { 00:20:44.634 "state": "completed", 00:20:44.634 "digest": "sha384", 00:20:44.634 "dhgroup": "ffdhe2048" 00:20:44.634 } 00:20:44.634 } 00:20:44.634 ]' 00:20:44.634 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.893 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.152 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:45.152 06:26:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:45.719 
06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.719 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.978 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:46.237 06:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:46.237 { 00:20:46.237 "cntlid": 65, 00:20:46.237 "qid": 0, 00:20:46.237 "state": "enabled", 00:20:46.237 "thread": "nvmf_tgt_poll_group_000", 00:20:46.237 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:46.237 "listen_address": { 00:20:46.237 "trtype": "TCP", 00:20:46.237 "adrfam": "IPv4", 00:20:46.237 "traddr": "10.0.0.2", 00:20:46.237 "trsvcid": "4420" 00:20:46.237 }, 00:20:46.237 "peer_address": { 00:20:46.237 "trtype": "TCP", 00:20:46.237 "adrfam": "IPv4", 00:20:46.237 "traddr": "10.0.0.1", 00:20:46.237 "trsvcid": "44784" 00:20:46.237 }, 00:20:46.237 "auth": { 00:20:46.237 "state": "completed", 00:20:46.237 "digest": "sha384", 00:20:46.237 "dhgroup": "ffdhe3072" 00:20:46.237 } 00:20:46.237 } 00:20:46.237 ]' 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.237 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.495 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.753 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:46.753 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.321 06:26:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.580 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.838 06:26:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.838 { 00:20:47.838 "cntlid": 67, 00:20:47.838 "qid": 0, 00:20:47.838 "state": "enabled", 00:20:47.838 "thread": "nvmf_tgt_poll_group_000", 00:20:47.838 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.838 "listen_address": { 00:20:47.838 "trtype": "TCP", 00:20:47.838 "adrfam": "IPv4", 00:20:47.838 "traddr": "10.0.0.2", 00:20:47.838 "trsvcid": "4420" 00:20:47.838 }, 00:20:47.838 "peer_address": { 00:20:47.838 "trtype": "TCP", 00:20:47.838 "adrfam": "IPv4", 00:20:47.838 "traddr": "10.0.0.1", 00:20:47.838 "trsvcid": "44812" 00:20:47.838 }, 00:20:47.838 "auth": { 00:20:47.838 "state": "completed", 00:20:47.838 "digest": "sha384", 00:20:47.838 "dhgroup": "ffdhe3072" 00:20:47.838 } 00:20:47.838 } 00:20:47.838 ]' 00:20:47.838 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.839 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:47.839 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.097 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:48.097 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.097 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.097 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.097 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.355 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:48.355 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.922 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:48.922 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:48.923 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:49.181 00:20:49.440 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.440 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:49.440 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.440 { 00:20:49.440 "cntlid": 69, 00:20:49.440 "qid": 0, 00:20:49.440 "state": "enabled", 00:20:49.440 "thread": "nvmf_tgt_poll_group_000", 00:20:49.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.440 
"listen_address": { 00:20:49.440 "trtype": "TCP", 00:20:49.440 "adrfam": "IPv4", 00:20:49.440 "traddr": "10.0.0.2", 00:20:49.440 "trsvcid": "4420" 00:20:49.440 }, 00:20:49.440 "peer_address": { 00:20:49.440 "trtype": "TCP", 00:20:49.440 "adrfam": "IPv4", 00:20:49.440 "traddr": "10.0.0.1", 00:20:49.440 "trsvcid": "44846" 00:20:49.440 }, 00:20:49.440 "auth": { 00:20:49.440 "state": "completed", 00:20:49.440 "digest": "sha384", 00:20:49.440 "dhgroup": "ffdhe3072" 00:20:49.440 } 00:20:49.440 } 00:20:49.440 ]' 00:20:49.440 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.699 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:49.957 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:49.957 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.524 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.783 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:50.783 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.042 { 00:20:51.042 "cntlid": 71, 00:20:51.042 "qid": 0, 00:20:51.042 "state": "enabled", 00:20:51.042 "thread": "nvmf_tgt_poll_group_000", 00:20:51.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.042 "listen_address": { 00:20:51.042 "trtype": "TCP", 00:20:51.042 "adrfam": "IPv4", 00:20:51.042 "traddr": "10.0.0.2", 00:20:51.042 "trsvcid": "4420" 00:20:51.042 }, 00:20:51.042 "peer_address": { 00:20:51.042 "trtype": "TCP", 00:20:51.042 "adrfam": "IPv4", 00:20:51.042 "traddr": "10.0.0.1", 00:20:51.042 "trsvcid": "44886" 00:20:51.042 }, 00:20:51.042 "auth": { 00:20:51.042 "state": "completed", 00:20:51.042 "digest": "sha384", 00:20:51.042 "dhgroup": "ffdhe3072" 00:20:51.042 } 00:20:51.042 } 00:20:51.042 ]' 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.042 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.042 06:26:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.301 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:51.301 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.301 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.301 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.301 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.559 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:51.559 06:26:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.126 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.126 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:52.385 00:20:52.385 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.385 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.385 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.643 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.643 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.643 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.643 06:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.643 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.643 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.643 { 00:20:52.643 "cntlid": 73, 00:20:52.643 "qid": 0, 00:20:52.644 "state": "enabled", 00:20:52.644 "thread": "nvmf_tgt_poll_group_000", 00:20:52.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.644 "listen_address": { 00:20:52.644 "trtype": "TCP", 00:20:52.644 "adrfam": "IPv4", 00:20:52.644 "traddr": "10.0.0.2", 00:20:52.644 "trsvcid": "4420" 00:20:52.644 }, 00:20:52.644 "peer_address": { 00:20:52.644 "trtype": "TCP", 00:20:52.644 "adrfam": "IPv4", 00:20:52.644 "traddr": "10.0.0.1", 00:20:52.644 "trsvcid": "44920" 00:20:52.644 }, 00:20:52.644 "auth": { 00:20:52.644 "state": "completed", 00:20:52.644 "digest": "sha384", 00:20:52.644 "dhgroup": "ffdhe4096" 00:20:52.644 } 00:20:52.644 } 00:20:52.644 ]' 00:20:52.644 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.644 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.644 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.902 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:52.902 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.902 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.902 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.902 06:26:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.160 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:53.160 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.726 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.726 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:53.984 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:54.242 00:20:54.242 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.242 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.242 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.501 { 00:20:54.501 "cntlid": 75, 00:20:54.501 "qid": 0, 00:20:54.501 "state": "enabled", 00:20:54.501 "thread": "nvmf_tgt_poll_group_000", 00:20:54.501 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.501 
"listen_address": { 00:20:54.501 "trtype": "TCP", 00:20:54.501 "adrfam": "IPv4", 00:20:54.501 "traddr": "10.0.0.2", 00:20:54.501 "trsvcid": "4420" 00:20:54.501 }, 00:20:54.501 "peer_address": { 00:20:54.501 "trtype": "TCP", 00:20:54.501 "adrfam": "IPv4", 00:20:54.501 "traddr": "10.0.0.1", 00:20:54.501 "trsvcid": "44944" 00:20:54.501 }, 00:20:54.501 "auth": { 00:20:54.501 "state": "completed", 00:20:54.501 "digest": "sha384", 00:20:54.501 "dhgroup": "ffdhe4096" 00:20:54.501 } 00:20:54.501 } 00:20:54.501 ]' 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.501 06:26:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.501 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:54.501 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.501 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.501 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.501 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.759 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:54.759 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.326 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.585 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:55.843 00:20:55.843 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:20:55.843 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.843 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:56.102 { 00:20:56.102 "cntlid": 77, 00:20:56.102 "qid": 0, 00:20:56.102 "state": "enabled", 00:20:56.102 "thread": "nvmf_tgt_poll_group_000", 00:20:56.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:56.102 "listen_address": { 00:20:56.102 "trtype": "TCP", 00:20:56.102 "adrfam": "IPv4", 00:20:56.102 "traddr": "10.0.0.2", 00:20:56.102 "trsvcid": "4420" 00:20:56.102 }, 00:20:56.102 "peer_address": { 00:20:56.102 "trtype": "TCP", 00:20:56.102 "adrfam": "IPv4", 00:20:56.102 "traddr": "10.0.0.1", 00:20:56.102 "trsvcid": "53548" 00:20:56.102 }, 00:20:56.102 "auth": { 00:20:56.102 "state": "completed", 00:20:56.102 "digest": "sha384", 00:20:56.102 "dhgroup": "ffdhe4096" 00:20:56.102 } 00:20:56.102 } 00:20:56.102 ]' 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.102 06:26:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.102 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.360 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:56.360 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:56.927 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:57.185 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:20:57.185 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:57.185 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:57.185 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:57.185 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:57.186 06:26:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.186 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:57.444 00:20:57.444 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.444 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.444 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.703 06:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.703 { 00:20:57.703 "cntlid": 79, 00:20:57.703 "qid": 0, 00:20:57.703 "state": "enabled", 00:20:57.703 "thread": "nvmf_tgt_poll_group_000", 00:20:57.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.703 "listen_address": { 00:20:57.703 "trtype": "TCP", 00:20:57.703 "adrfam": "IPv4", 00:20:57.703 "traddr": "10.0.0.2", 00:20:57.703 "trsvcid": "4420" 00:20:57.703 }, 00:20:57.703 "peer_address": { 00:20:57.703 "trtype": "TCP", 00:20:57.703 "adrfam": "IPv4", 00:20:57.703 "traddr": "10.0.0.1", 00:20:57.703 "trsvcid": "53584" 00:20:57.703 }, 00:20:57.703 "auth": { 00:20:57.703 "state": "completed", 00:20:57.703 "digest": "sha384", 00:20:57.703 "dhgroup": "ffdhe4096" 00:20:57.703 } 00:20:57.703 } 00:20:57.703 ]' 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.703 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.703 06:26:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:57.962 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:57.962 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:20:58.528 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.786 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.787 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.787 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:58.787 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:59.045 00:20:59.045 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:59.045 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.045 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.304 { 00:20:59.304 "cntlid": 81, 00:20:59.304 "qid": 0, 00:20:59.304 "state": "enabled", 00:20:59.304 "thread": "nvmf_tgt_poll_group_000", 00:20:59.304 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.304 "listen_address": { 
00:20:59.304 "trtype": "TCP", 00:20:59.304 "adrfam": "IPv4", 00:20:59.304 "traddr": "10.0.0.2", 00:20:59.304 "trsvcid": "4420" 00:20:59.304 }, 00:20:59.304 "peer_address": { 00:20:59.304 "trtype": "TCP", 00:20:59.304 "adrfam": "IPv4", 00:20:59.304 "traddr": "10.0.0.1", 00:20:59.304 "trsvcid": "53606" 00:20:59.304 }, 00:20:59.304 "auth": { 00:20:59.304 "state": "completed", 00:20:59.304 "digest": "sha384", 00:20:59.304 "dhgroup": "ffdhe6144" 00:20:59.304 } 00:20:59.304 } 00:20:59.304 ]' 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.304 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.562 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:20:59.562 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:00.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.129 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.387 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.388 06:26:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:00.646 00:21:00.646 06:26:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.646 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.646 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.904 { 00:21:00.904 "cntlid": 83, 00:21:00.904 "qid": 0, 00:21:00.904 "state": "enabled", 00:21:00.904 "thread": "nvmf_tgt_poll_group_000", 00:21:00.904 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.904 "listen_address": { 00:21:00.904 "trtype": "TCP", 00:21:00.904 "adrfam": "IPv4", 00:21:00.904 "traddr": "10.0.0.2", 00:21:00.904 "trsvcid": "4420" 00:21:00.904 }, 00:21:00.904 "peer_address": { 00:21:00.904 "trtype": "TCP", 00:21:00.904 "adrfam": "IPv4", 00:21:00.904 "traddr": "10.0.0.1", 00:21:00.904 "trsvcid": "53650" 00:21:00.904 }, 00:21:00.904 "auth": { 00:21:00.904 "state": "completed", 00:21:00.904 "digest": "sha384", 00:21:00.904 "dhgroup": "ffdhe6144" 00:21:00.904 } 00:21:00.904 } 00:21:00.904 ]' 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.904 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:01.163 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:01.163 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:01.163 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:01.163 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.163 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.421 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:01.421 06:26:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.987 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.987 06:26:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:01.987 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:02.554 00:21:02.554 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.554 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.554 06:26:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.554 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.555 { 00:21:02.555 "cntlid": 85, 00:21:02.555 "qid": 0, 00:21:02.555 "state": "enabled", 00:21:02.555 "thread": "nvmf_tgt_poll_group_000", 00:21:02.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.555 "listen_address": { 00:21:02.555 "trtype": "TCP", 00:21:02.555 "adrfam": "IPv4", 00:21:02.555 "traddr": "10.0.0.2", 00:21:02.555 "trsvcid": "4420" 00:21:02.555 }, 00:21:02.555 "peer_address": { 00:21:02.555 "trtype": "TCP", 00:21:02.555 "adrfam": "IPv4", 00:21:02.555 "traddr": "10.0.0.1", 00:21:02.555 "trsvcid": "53684" 00:21:02.555 }, 00:21:02.555 "auth": { 00:21:02.555 "state": "completed", 00:21:02.555 "digest": "sha384", 00:21:02.555 "dhgroup": "ffdhe6144" 00:21:02.555 } 00:21:02.555 } 00:21:02.555 ]' 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.555 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.813 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.071 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:03.071 06:26:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.637 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
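The loop that the trace above keeps re-entering (`for keyid in "${!keys[@]}"`) drives one authentication cycle per key: restrict the host's DH-HMAC-CHAP options, allow the host on the subsystem with that key, attach a controller, then tear everything down. A minimal standalone sketch of that cycle, with the subsystem/host NQNs and addresses copied from this log and `rpc` as a stand-in that only echoes the command instead of invoking SPDK's `rpc.py` (so it runs without a live target):

```shell
#!/usr/bin/env bash
# Dry-run stand-in for "rpc.py -s /var/tmp/host.sock ..."; echoes instead of executing.
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; }

DIGEST=sha384
DHGROUP=ffdhe6144
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

for keyid in 0 1 2 3; do
  # Pin the host side to a single digest/dhgroup pair for this iteration.
  rpc bdev_nvme_set_options --dhchap-digests "$DIGEST" --dhchap-dhgroups "$DHGROUP"
  # Allow the host on the subsystem with the key under test.
  rpc nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
  # Attach, then tear down so the next key starts clean.
  rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
  rpc bdev_nvme_detach_controller nvme0
  rpc nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
done
```

This is an illustration of the loop's shape, not the test script itself; the real `target/auth.sh` also verifies qpair state between attach and detach.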
00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:03.637 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:04.202 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:04.202 { 00:21:04.202 "cntlid": 87, 00:21:04.202 "qid": 0, 00:21:04.202 "state": "enabled", 00:21:04.202 "thread": "nvmf_tgt_poll_group_000", 00:21:04.202 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:04.202 "listen_address": { 00:21:04.202 "trtype": 
"TCP", 00:21:04.202 "adrfam": "IPv4", 00:21:04.202 "traddr": "10.0.0.2", 00:21:04.202 "trsvcid": "4420" 00:21:04.202 }, 00:21:04.202 "peer_address": { 00:21:04.202 "trtype": "TCP", 00:21:04.202 "adrfam": "IPv4", 00:21:04.202 "traddr": "10.0.0.1", 00:21:04.202 "trsvcid": "53704" 00:21:04.202 }, 00:21:04.202 "auth": { 00:21:04.202 "state": "completed", 00:21:04.202 "digest": "sha384", 00:21:04.202 "dhgroup": "ffdhe6144" 00:21:04.202 } 00:21:04.202 } 00:21:04.202 ]' 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:04.202 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:04.461 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:04.461 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:04.461 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:04.461 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:04.461 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.719 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:04.719 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:05.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.286 06:26:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.286 06:26:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.853 00:21:05.853 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.853 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.853 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:06.112 { 00:21:06.112 "cntlid": 89, 00:21:06.112 "qid": 0, 00:21:06.112 "state": "enabled", 00:21:06.112 "thread": "nvmf_tgt_poll_group_000", 00:21:06.112 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:06.112 "listen_address": { 00:21:06.112 "trtype": "TCP", 00:21:06.112 "adrfam": "IPv4", 00:21:06.112 "traddr": "10.0.0.2", 00:21:06.112 "trsvcid": "4420" 00:21:06.112 }, 00:21:06.112 "peer_address": { 00:21:06.112 "trtype": "TCP", 00:21:06.112 "adrfam": "IPv4", 00:21:06.112 "traddr": "10.0.0.1", 00:21:06.112 "trsvcid": "49922" 00:21:06.112 }, 00:21:06.112 "auth": { 00:21:06.112 "state": "completed", 00:21:06.112 "digest": "sha384", 00:21:06.112 "dhgroup": "ffdhe8192" 00:21:06.112 } 00:21:06.112 } 00:21:06.112 ]' 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:06.112 06:26:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:06.112 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:06.371 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:06.371 06:26:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.938 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.938 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.196 06:26:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.763 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.763 { 00:21:07.763 "cntlid": 91, 00:21:07.763 "qid": 0, 00:21:07.763 "state": "enabled", 00:21:07.763 "thread": "nvmf_tgt_poll_group_000", 00:21:07.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.763 "listen_address": { 00:21:07.763 "trtype": "TCP", 00:21:07.763 "adrfam": "IPv4", 00:21:07.763 "traddr": "10.0.0.2", 00:21:07.763 "trsvcid": "4420" 00:21:07.763 }, 00:21:07.763 "peer_address": { 00:21:07.763 "trtype": "TCP", 00:21:07.763 "adrfam": "IPv4", 00:21:07.763 "traddr": "10.0.0.1", 00:21:07.763 "trsvcid": "49964" 00:21:07.763 }, 00:21:07.763 "auth": { 00:21:07.763 "state": "completed", 00:21:07.763 "digest": "sha384", 00:21:07.763 "dhgroup": "ffdhe8192" 00:21:07.763 } 00:21:07.763 } 00:21:07.763 ]' 00:21:07.763 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.021 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.280 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:08.280 06:26:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:08.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
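One subtlety visible in the xtrace is `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`: the `:+` parameter expansion makes the `--dhchap-ctrlr-key` flag conditional, so it is passed only when a controller key exists for that index (key3 above has none, hence `nvmf_subsystem_add_host ... --dhchap-key key3` with no ctrlr key). A small runnable demonstration of the idiom, with illustrative array contents:

```shell
#!/usr/bin/env bash
# Illustrative controller-key table: index 1 deliberately has no ckey.
ckeys=("ckey-a" "" "ckey-c")

# Build the dhchap arguments for a given key index; the ctrlr-key flag is
# emitted only when ckeys[keyid] is non-empty, mirroring auth.sh.
args_for() {
  local keyid=$1
  local ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
  echo --dhchap-key "key$keyid" "${ckey[@]}"
}

args_for 0   # controller key present: both flags emitted
args_for 1   # empty ckeys entry: only --dhchap-key
```

With `"${ckey[@]}"` quoted, an empty array expands to zero words rather than an empty argument, which is what keeps the flag off the RPC command line entirely.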
00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.846 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.104 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.104 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.104 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.104 06:27:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:09.362 00:21:09.362 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:09.362 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:09.362 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:09.621 { 00:21:09.621 "cntlid": 93, 00:21:09.621 "qid": 0, 00:21:09.621 "state": "enabled", 00:21:09.621 "thread": "nvmf_tgt_poll_group_000", 00:21:09.621 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:09.621 "listen_address": { 00:21:09.621 "trtype": "TCP", 00:21:09.621 "adrfam": "IPv4", 00:21:09.621 "traddr": "10.0.0.2", 00:21:09.621 "trsvcid": "4420" 00:21:09.621 }, 00:21:09.621 "peer_address": { 00:21:09.621 "trtype": "TCP", 00:21:09.621 "adrfam": "IPv4", 00:21:09.621 "traddr": "10.0.0.1", 00:21:09.621 "trsvcid": "49994" 00:21:09.621 }, 00:21:09.621 "auth": { 00:21:09.621 "state": "completed", 00:21:09.621 "digest": "sha384", 00:21:09.621 "dhgroup": "ffdhe8192" 00:21:09.621 } 00:21:09.621 } 00:21:09.621 ]' 00:21:09.621 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:09.879 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.137 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:10.137 06:27:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:10.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:10.704 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:10.705 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:11.271
00:21:11.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:11.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:11.271 06:27:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:11.611 {
00:21:11.611 "cntlid": 95,
00:21:11.611 "qid": 0,
00:21:11.611 "state": "enabled",
00:21:11.611 "thread": "nvmf_tgt_poll_group_000",
00:21:11.611 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:11.611 "listen_address": {
00:21:11.611 "trtype": "TCP",
00:21:11.611 "adrfam": "IPv4",
00:21:11.611 "traddr": "10.0.0.2",
00:21:11.611 "trsvcid": "4420"
00:21:11.611 },
00:21:11.611 "peer_address": {
00:21:11.611 "trtype": "TCP",
00:21:11.611 "adrfam": "IPv4",
00:21:11.611 "traddr": "10.0.0.1",
00:21:11.611 "trsvcid": "50026"
00:21:11.611 },
00:21:11.611 "auth": {
00:21:11.611 "state": "completed",
00:21:11.611 "digest": "sha384",
00:21:11.611 "dhgroup": "ffdhe8192"
00:21:11.611 }
00:21:11.611 }
00:21:11.611 ]'
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:11.611 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:11.875 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:21:11.875 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:21:12.441 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:12.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}"
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:12.442 06:27:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.700 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:12.958
00:21:12.958 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:12.958 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:12.958 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:13.217 {
00:21:13.217 "cntlid": 97,
00:21:13.217 "qid": 0,
00:21:13.217 "state": "enabled",
00:21:13.217 "thread": "nvmf_tgt_poll_group_000",
00:21:13.217 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:13.217 "listen_address": {
00:21:13.217 "trtype": "TCP",
00:21:13.217 "adrfam": "IPv4",
00:21:13.217 "traddr": "10.0.0.2",
00:21:13.217 "trsvcid": "4420"
00:21:13.217 },
00:21:13.217 "peer_address": {
00:21:13.217 "trtype": "TCP",
00:21:13.217 "adrfam": "IPv4",
00:21:13.217 "traddr": "10.0.0.1",
00:21:13.217 "trsvcid": "50046"
00:21:13.217 },
00:21:13.217 "auth": {
00:21:13.217 "state": "completed",
00:21:13.217 "digest": "sha512",
00:21:13.217 "dhgroup": "null"
00:21:13.217 }
00:21:13.217 }
00:21:13.217 ]'
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:13.217 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:13.474 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:21:13.474 06:27:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=:
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:14.036 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:14.036 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:14.037 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:14.295 06:27:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:21:14.554
00:21:14.554 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:14.554 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:14.554 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:14.813 {
00:21:14.813 "cntlid": 99,
00:21:14.813 "qid": 0,
00:21:14.813 "state": "enabled",
00:21:14.813 "thread": "nvmf_tgt_poll_group_000",
00:21:14.813 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:14.813 "listen_address": {
00:21:14.813 "trtype": "TCP",
00:21:14.813 "adrfam": "IPv4",
00:21:14.813 "traddr": "10.0.0.2",
00:21:14.813 "trsvcid": "4420"
00:21:14.813 },
00:21:14.813 "peer_address": {
00:21:14.813 "trtype": "TCP",
00:21:14.813 "adrfam": "IPv4",
00:21:14.813 "traddr": "10.0.0.1",
00:21:14.813 "trsvcid": "50074"
00:21:14.813 },
00:21:14.813 "auth": {
00:21:14.813 "state": "completed",
00:21:14.813 "digest": "sha512",
00:21:14.813 "dhgroup": "null"
00:21:14.813 }
00:21:14.813 }
00:21:14.813 ]'
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:14.813 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:15.071 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:21:15.071 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==:
00:21:15.638 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:15.638 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:15.638 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:15.638 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.639 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.639 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.639 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:15.639 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:15.639 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
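Each connect_authenticate iteration in this log ends the same way: the test pulls nvmf_subsystem_get_qpairs output and runs it through jq to confirm that the negotiated auth digest, dhgroup, and state match what was configured. A minimal, hypothetical Python sketch of that verification step follows; the sample qpair mirrors fields shown in the log above, and check_auth is an illustrative name, not part of the SPDK test suite:

```python
import json

# Sample nvmf_subsystem_get_qpairs output, trimmed to the fields the test inspects.
qpairs_json = """
[
  {
    "cntlid": 97,
    "qid": 0,
    "state": "enabled",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
    "auth": {"state": "completed", "digest": "sha512", "dhgroup": "null"}
  }
]
"""

def check_auth(qpairs: str, digest: str, dhgroup: str) -> bool:
    """Mirror the log's jq checks on .[0].auth.{digest,dhgroup,state}."""
    auth = json.loads(qpairs)[0]["auth"]
    return (auth["digest"] == digest
            and auth["dhgroup"] == dhgroup
            and auth["state"] == "completed")

print(check_auth(qpairs_json, "sha512", "null"))  # True for this iteration's settings
```

The shell test expresses the same checks as `[[ sha512 == \s\h\a\5\1\2 ]]`-style comparisons against jq output, failing the run if any field disagrees.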
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:15.897 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:21:16.155
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:16.155 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:16.155 {
00:21:16.155 "cntlid": 101,
00:21:16.155 "qid": 0,
00:21:16.155 "state": "enabled",
00:21:16.155 "thread": "nvmf_tgt_poll_group_000",
00:21:16.155 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:16.155 "listen_address": {
00:21:16.155 "trtype": "TCP",
00:21:16.155 "adrfam": "IPv4",
00:21:16.155 "traddr": "10.0.0.2",
00:21:16.155 "trsvcid": "4420"
00:21:16.155 },
00:21:16.155 "peer_address": {
00:21:16.155 "trtype": "TCP",
00:21:16.155 "adrfam": "IPv4",
00:21:16.155 "traddr": "10.0.0.1",
00:21:16.155 "trsvcid": "34942"
00:21:16.155 },
00:21:16.155 "auth": {
00:21:16.155 "state": "completed",
00:21:16.155 "digest": "sha512",
00:21:16.155 "dhgroup": "null"
00:21:16.155 }
00:21:16.155 }
00:21:16.155 ]'
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:16.414 06:27:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:16.673 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:21:16.673 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj:
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:17.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:17.239 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:17.498 06:27:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:21:17.756
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:17.756 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:17.756 {
00:21:17.756 "cntlid": 103,
00:21:17.756 "qid": 0,
00:21:17.756 "state": "enabled",
00:21:17.757 "thread": "nvmf_tgt_poll_group_000",
00:21:17.757 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:21:17.757 "listen_address": {
00:21:17.757 "trtype": "TCP",
00:21:17.757 "adrfam": "IPv4",
00:21:17.757 "traddr": "10.0.0.2",
00:21:17.757 "trsvcid": "4420"
00:21:17.757 },
00:21:17.757 "peer_address": {
00:21:17.757 "trtype": "TCP",
00:21:17.757 "adrfam": "IPv4",
00:21:17.757 "traddr": "10.0.0.1",
00:21:17.757 "trsvcid": "34968"
00:21:17.757 },
00:21:17.757 "auth": {
00:21:17.757 "state": "completed",
00:21:17.757 "digest": "sha512",
00:21:17.757 "dhgroup": "null"
00:21:17.757 }
00:21:17.757 }
00:21:17.757 ]'
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:21:18.015 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:21:18.274 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:21:18.274 06:27:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=:
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:21:18.840 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:18.840 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:19.099 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:21:19.358
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:21:19.358 {
00:21:19.358 "cntlid": 105,
00:21:19.358 "qid": 0,
00:21:19.358 "state": "enabled",
00:21:19.358 "thread": "nvmf_tgt_poll_group_000",
00:21:19.358 "hostnqn":
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.358 "listen_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.2", 00:21:19.358 "trsvcid": "4420" 00:21:19.358 }, 00:21:19.358 "peer_address": { 00:21:19.358 "trtype": "TCP", 00:21:19.358 "adrfam": "IPv4", 00:21:19.358 "traddr": "10.0.0.1", 00:21:19.358 "trsvcid": "35006" 00:21:19.358 }, 00:21:19.358 "auth": { 00:21:19.358 "state": "completed", 00:21:19.358 "digest": "sha512", 00:21:19.358 "dhgroup": "ffdhe2048" 00:21:19.358 } 00:21:19.358 } 00:21:19.358 ]' 00:21:19.358 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:19.616 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:19.874 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:19.874 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.441 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:20.441 06:27:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.441 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:20.700 00:21:20.700 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:20.700 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:20.700 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:20.958 { 00:21:20.958 "cntlid": 107, 00:21:20.958 "qid": 0, 00:21:20.958 "state": "enabled", 00:21:20.958 "thread": "nvmf_tgt_poll_group_000", 00:21:20.958 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:20.958 "listen_address": { 00:21:20.958 "trtype": "TCP", 00:21:20.958 "adrfam": "IPv4", 00:21:20.958 "traddr": "10.0.0.2", 00:21:20.958 "trsvcid": "4420" 00:21:20.958 }, 00:21:20.958 "peer_address": { 00:21:20.958 "trtype": "TCP", 00:21:20.958 "adrfam": "IPv4", 00:21:20.958 "traddr": "10.0.0.1", 00:21:20.958 "trsvcid": "35034" 00:21:20.958 }, 00:21:20.958 "auth": { 00:21:20.958 "state": 
"completed", 00:21:20.958 "digest": "sha512", 00:21:20.958 "dhgroup": "ffdhe2048" 00:21:20.958 } 00:21:20.958 } 00:21:20.958 ]' 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:20.958 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.216 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.216 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.216 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.216 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:21.216 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:21.783 06:27:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.783 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:21.783 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.042 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:22.300 00:21:22.300 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:22.300 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:22.300 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.559 
06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:22.559 { 00:21:22.559 "cntlid": 109, 00:21:22.559 "qid": 0, 00:21:22.559 "state": "enabled", 00:21:22.559 "thread": "nvmf_tgt_poll_group_000", 00:21:22.559 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:22.559 "listen_address": { 00:21:22.559 "trtype": "TCP", 00:21:22.559 "adrfam": "IPv4", 00:21:22.559 "traddr": "10.0.0.2", 00:21:22.559 "trsvcid": "4420" 00:21:22.559 }, 00:21:22.559 "peer_address": { 00:21:22.559 "trtype": "TCP", 00:21:22.559 "adrfam": "IPv4", 00:21:22.559 "traddr": "10.0.0.1", 00:21:22.559 "trsvcid": "35072" 00:21:22.559 }, 00:21:22.559 "auth": { 00:21:22.559 "state": "completed", 00:21:22.559 "digest": "sha512", 00:21:22.559 "dhgroup": "ffdhe2048" 00:21:22.559 } 00:21:22.559 } 00:21:22.559 ]' 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:22.559 06:27:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.559 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.817 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:22.818 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.384 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.384 
06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.384 06:27:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.643 06:27:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.643 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:23.901 00:21:23.901 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.901 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.901 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:24.160 { 00:21:24.160 "cntlid": 111, 
00:21:24.160 "qid": 0, 00:21:24.160 "state": "enabled", 00:21:24.160 "thread": "nvmf_tgt_poll_group_000", 00:21:24.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:24.160 "listen_address": { 00:21:24.160 "trtype": "TCP", 00:21:24.160 "adrfam": "IPv4", 00:21:24.160 "traddr": "10.0.0.2", 00:21:24.160 "trsvcid": "4420" 00:21:24.160 }, 00:21:24.160 "peer_address": { 00:21:24.160 "trtype": "TCP", 00:21:24.160 "adrfam": "IPv4", 00:21:24.160 "traddr": "10.0.0.1", 00:21:24.160 "trsvcid": "35084" 00:21:24.160 }, 00:21:24.160 "auth": { 00:21:24.160 "state": "completed", 00:21:24.160 "digest": "sha512", 00:21:24.160 "dhgroup": "ffdhe2048" 00:21:24.160 } 00:21:24.160 } 00:21:24.160 ]' 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.160 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:24.419 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:24.419 06:27:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:24.986 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:25.245 06:27:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.245 06:27:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:25.504 00:21:25.504 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.504 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.504 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.763 { 00:21:25.763 "cntlid": 113, 00:21:25.763 "qid": 0, 00:21:25.763 "state": "enabled", 00:21:25.763 "thread": "nvmf_tgt_poll_group_000", 00:21:25.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.763 "listen_address": { 00:21:25.763 "trtype": "TCP", 00:21:25.763 "adrfam": "IPv4", 00:21:25.763 "traddr": "10.0.0.2", 00:21:25.763 "trsvcid": "4420" 00:21:25.763 }, 00:21:25.763 "peer_address": { 00:21:25.763 "trtype": "TCP", 00:21:25.763 "adrfam": "IPv4", 00:21:25.763 "traddr": "10.0.0.1", 00:21:25.763 "trsvcid": "58518" 00:21:25.763 }, 00:21:25.763 "auth": { 00:21:25.763 "state": 
"completed", 00:21:25.763 "digest": "sha512", 00:21:25.763 "dhgroup": "ffdhe3072" 00:21:25.763 } 00:21:25.763 } 00:21:25.763 ]' 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.763 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:26.022 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:26.022 06:27:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.589 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.848 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:27.107 00:21:27.107 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:27.107 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.107 06:27:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.366 { 00:21:27.366 "cntlid": 115, 00:21:27.366 "qid": 0, 00:21:27.366 "state": "enabled", 00:21:27.366 "thread": "nvmf_tgt_poll_group_000", 00:21:27.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.366 "listen_address": { 00:21:27.366 "trtype": "TCP", 00:21:27.366 "adrfam": "IPv4", 00:21:27.366 "traddr": "10.0.0.2", 00:21:27.366 "trsvcid": "4420" 00:21:27.366 }, 00:21:27.366 "peer_address": { 00:21:27.366 "trtype": "TCP", 00:21:27.366 "adrfam": "IPv4", 00:21:27.366 "traddr": "10.0.0.1", 00:21:27.366 "trsvcid": "58550" 00:21:27.366 }, 00:21:27.366 "auth": { 00:21:27.366 "state": "completed", 00:21:27.366 "digest": "sha512", 00:21:27.366 "dhgroup": "ffdhe3072" 00:21:27.366 } 00:21:27.366 } 00:21:27.366 ]' 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.366 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.625 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:27.625 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.192 06:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.192 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.451 06:27:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.451 06:27:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.708 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.709 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.967 { 00:21:28.967 "cntlid": 117, 00:21:28.967 "qid": 0, 00:21:28.967 "state": "enabled", 00:21:28.967 "thread": "nvmf_tgt_poll_group_000", 00:21:28.967 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.967 "listen_address": { 00:21:28.967 "trtype": "TCP", 00:21:28.967 "adrfam": "IPv4", 00:21:28.967 "traddr": "10.0.0.2", 00:21:28.967 "trsvcid": "4420" 00:21:28.967 }, 00:21:28.967 "peer_address": { 00:21:28.967 "trtype": "TCP", 00:21:28.967 "adrfam": "IPv4", 00:21:28.967 "traddr": "10.0.0.1", 00:21:28.967 "trsvcid": "58582" 00:21:28.967 }, 00:21:28.967 "auth": { 00:21:28.967 "state": "completed", 00:21:28.967 "digest": "sha512", 00:21:28.967 "dhgroup": "ffdhe3072" 00:21:28.967 } 00:21:28.967 } 00:21:28.967 ]' 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:28.967 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:29.226 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:29.226 06:27:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:29.793 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.053 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:30.311 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.311 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.570 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.570 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.570 { 00:21:30.570 "cntlid": 119, 00:21:30.570 "qid": 0, 00:21:30.570 "state": "enabled", 00:21:30.570 "thread": "nvmf_tgt_poll_group_000", 00:21:30.570 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.570 "listen_address": { 00:21:30.570 "trtype": "TCP", 00:21:30.570 "adrfam": "IPv4", 00:21:30.570 "traddr": "10.0.0.2", 00:21:30.570 "trsvcid": "4420" 00:21:30.570 }, 00:21:30.570 "peer_address": { 00:21:30.570 "trtype": "TCP", 00:21:30.570 "adrfam": "IPv4", 00:21:30.570 "traddr": "10.0.0.1", 00:21:30.570 "trsvcid": "58606" 00:21:30.570 }, 00:21:30.570 "auth": { 00:21:30.570 
"state": "completed", 00:21:30.570 "digest": "sha512", 00:21:30.570 "dhgroup": "ffdhe3072" 00:21:30.570 } 00:21:30.570 } 00:21:30.570 ]' 00:21:30.570 06:27:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.570 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.829 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:30.829 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.396 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.396 06:27:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.396 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.654 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.654 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.654 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.654 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:31.912 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.913 
06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.913 { 00:21:31.913 "cntlid": 121, 00:21:31.913 "qid": 0, 00:21:31.913 "state": "enabled", 00:21:31.913 "thread": "nvmf_tgt_poll_group_000", 00:21:31.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.913 "listen_address": { 00:21:31.913 "trtype": "TCP", 00:21:31.913 "adrfam": "IPv4", 00:21:31.913 "traddr": "10.0.0.2", 00:21:31.913 "trsvcid": "4420" 00:21:31.913 }, 00:21:31.913 "peer_address": { 00:21:31.913 "trtype": "TCP", 00:21:31.913 "adrfam": "IPv4", 00:21:31.913 "traddr": "10.0.0.1", 00:21:31.913 "trsvcid": "58628" 00:21:31.913 }, 00:21:31.913 "auth": { 00:21:31.913 "state": "completed", 00:21:31.913 "digest": "sha512", 00:21:31.913 "dhgroup": "ffdhe4096" 00:21:31.913 } 00:21:31.913 } 00:21:31.913 ]' 00:21:31.913 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:32.171 06:27:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:32.171 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.429 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:32.430 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.996 06:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:32.996 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.255 06:27:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.255 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:33.514 00:21:33.514 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.514 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.514 06:27:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.514 { 00:21:33.514 "cntlid": 123, 00:21:33.514 "qid": 0, 00:21:33.514 "state": "enabled", 00:21:33.514 "thread": "nvmf_tgt_poll_group_000", 00:21:33.514 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.514 "listen_address": { 00:21:33.514 "trtype": "TCP", 00:21:33.514 "adrfam": "IPv4", 00:21:33.514 "traddr": "10.0.0.2", 00:21:33.514 "trsvcid": "4420" 00:21:33.514 }, 00:21:33.514 "peer_address": { 00:21:33.514 "trtype": "TCP", 00:21:33.514 "adrfam": "IPv4", 00:21:33.514 "traddr": "10.0.0.1", 00:21:33.514 "trsvcid": "58670" 00:21:33.514 }, 00:21:33.514 "auth": { 00:21:33.514 "state": "completed", 00:21:33.514 "digest": "sha512", 00:21:33.514 "dhgroup": "ffdhe4096" 00:21:33.514 } 00:21:33.514 } 00:21:33.514 ]' 00:21:33.514 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.773 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 
00:21:34.031 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:34.031 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:34.601 06:27:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.601 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.601 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:34.860 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:35.118 { 00:21:35.118 "cntlid": 125, 00:21:35.118 "qid": 0, 00:21:35.118 "state": "enabled", 00:21:35.118 "thread": "nvmf_tgt_poll_group_000", 00:21:35.118 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:35.118 "listen_address": { 00:21:35.118 "trtype": "TCP", 00:21:35.118 "adrfam": "IPv4", 00:21:35.118 "traddr": "10.0.0.2", 00:21:35.118 "trsvcid": "4420" 00:21:35.118 }, 00:21:35.118 "peer_address": { 00:21:35.118 "trtype": "TCP", 00:21:35.118 "adrfam": "IPv4", 
00:21:35.118 "traddr": "10.0.0.1", 00:21:35.118 "trsvcid": "47982" 00:21:35.118 }, 00:21:35.118 "auth": { 00:21:35.118 "state": "completed", 00:21:35.118 "digest": "sha512", 00:21:35.118 "dhgroup": "ffdhe4096" 00:21:35.118 } 00:21:35.118 } 00:21:35.118 ]' 00:21:35.118 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.377 06:27:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.635 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:35.635 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:36.203 06:27:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.203 06:27:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:36.461 00:21:36.461 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.461 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.461 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.720 { 00:21:36.720 "cntlid": 127, 00:21:36.720 "qid": 0, 00:21:36.720 "state": "enabled", 00:21:36.720 "thread": "nvmf_tgt_poll_group_000", 00:21:36.720 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.720 "listen_address": { 00:21:36.720 "trtype": "TCP", 00:21:36.720 "adrfam": "IPv4", 00:21:36.720 "traddr": "10.0.0.2", 00:21:36.720 "trsvcid": "4420" 00:21:36.720 }, 00:21:36.720 "peer_address": { 00:21:36.720 "trtype": "TCP", 00:21:36.720 "adrfam": "IPv4", 00:21:36.720 "traddr": "10.0.0.1", 00:21:36.720 "trsvcid": "47994" 00:21:36.720 }, 00:21:36.720 "auth": { 00:21:36.720 "state": "completed", 00:21:36.720 "digest": "sha512", 00:21:36.720 "dhgroup": "ffdhe4096" 00:21:36.720 } 00:21:36.720 } 00:21:36.720 ]' 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.720 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.979 06:27:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:36.979 06:27:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:37.546 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.546 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.546 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.546 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.546 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:37.804 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:38.372 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.372 06:27:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.372 { 00:21:38.372 "cntlid": 129, 00:21:38.372 "qid": 0, 00:21:38.372 "state": "enabled", 00:21:38.372 "thread": "nvmf_tgt_poll_group_000", 00:21:38.372 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.372 "listen_address": { 00:21:38.372 "trtype": "TCP", 00:21:38.372 "adrfam": "IPv4", 00:21:38.372 "traddr": "10.0.0.2", 00:21:38.372 "trsvcid": "4420" 00:21:38.372 }, 00:21:38.372 "peer_address": { 00:21:38.372 "trtype": "TCP", 00:21:38.372 "adrfam": "IPv4", 00:21:38.372 "traddr": "10.0.0.1", 00:21:38.372 "trsvcid": "48016" 00:21:38.372 }, 00:21:38.372 "auth": { 00:21:38.372 "state": "completed", 00:21:38.372 "digest": "sha512", 00:21:38.372 "dhgroup": "ffdhe6144" 00:21:38.372 } 00:21:38.372 } 00:21:38.372 ]' 00:21:38.372 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.631 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.889 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:38.889 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.457 06:27:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.457 06:27:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.457 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.716 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.716 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.716 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.716 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:39.974 00:21:39.974 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.974 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.974 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.233 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:40.233 { 00:21:40.233 "cntlid": 131, 00:21:40.233 "qid": 0, 00:21:40.233 "state": "enabled", 00:21:40.233 "thread": "nvmf_tgt_poll_group_000", 00:21:40.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:40.233 "listen_address": { 00:21:40.233 "trtype": "TCP", 00:21:40.233 "adrfam": "IPv4", 00:21:40.233 "traddr": "10.0.0.2", 00:21:40.233 
"trsvcid": "4420" 00:21:40.233 }, 00:21:40.234 "peer_address": { 00:21:40.234 "trtype": "TCP", 00:21:40.234 "adrfam": "IPv4", 00:21:40.234 "traddr": "10.0.0.1", 00:21:40.234 "trsvcid": "48038" 00:21:40.234 }, 00:21:40.234 "auth": { 00:21:40.234 "state": "completed", 00:21:40.234 "digest": "sha512", 00:21:40.234 "dhgroup": "ffdhe6144" 00:21:40.234 } 00:21:40.234 } 00:21:40.234 ]' 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.234 06:27:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.492 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:40.492 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.059 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.318 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.319 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:41.577 00:21:41.577 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.577 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:41.577 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.836 { 00:21:41.836 "cntlid": 133, 00:21:41.836 "qid": 0, 00:21:41.836 "state": "enabled", 00:21:41.836 "thread": "nvmf_tgt_poll_group_000", 00:21:41.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.836 "listen_address": { 00:21:41.836 "trtype": "TCP", 00:21:41.836 "adrfam": "IPv4", 00:21:41.836 "traddr": "10.0.0.2", 00:21:41.836 "trsvcid": "4420" 00:21:41.836 }, 00:21:41.836 "peer_address": { 00:21:41.836 "trtype": "TCP", 00:21:41.836 "adrfam": "IPv4", 00:21:41.836 "traddr": "10.0.0.1", 00:21:41.836 "trsvcid": "48072" 00:21:41.836 }, 00:21:41.836 "auth": { 00:21:41.836 "state": "completed", 00:21:41.836 "digest": "sha512", 00:21:41.836 "dhgroup": "ffdhe6144" 00:21:41.836 } 00:21:41.836 } 00:21:41.836 ]' 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.836 06:27:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.836 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:42.095 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:42.095 06:27:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:42.662 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.662 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.662 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.662 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.662 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.662 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.663 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.663 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.663 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:42.922 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:43.180 00:21:43.180 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:43.180 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:43.180 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:43.440 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.440 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:43.440 { 00:21:43.440 "cntlid": 135, 00:21:43.440 "qid": 0, 00:21:43.440 "state": "enabled", 00:21:43.440 "thread": "nvmf_tgt_poll_group_000", 00:21:43.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:43.440 "listen_address": { 00:21:43.440 "trtype": "TCP", 00:21:43.440 "adrfam": "IPv4", 00:21:43.440 "traddr": "10.0.0.2", 00:21:43.440 "trsvcid": "4420" 00:21:43.440 }, 00:21:43.440 "peer_address": { 00:21:43.440 "trtype": "TCP", 00:21:43.440 "adrfam": "IPv4", 00:21:43.440 "traddr": "10.0.0.1", 00:21:43.440 "trsvcid": "48092" 00:21:43.440 }, 00:21:43.440 "auth": { 00:21:43.440 "state": "completed", 00:21:43.440 "digest": "sha512", 00:21:43.440 "dhgroup": "ffdhe6144" 00:21:43.440 } 00:21:43.440 } 00:21:43.440 ]' 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:43.440 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:43.697 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:44.263 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.263 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.263 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:44.263 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.263 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.522 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.522 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:44.522 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:44.522 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.522 06:27:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:44.522 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:45.089 00:21:45.089 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.089 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.089 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.348 { 00:21:45.348 "cntlid": 137, 00:21:45.348 "qid": 0, 00:21:45.348 "state": "enabled", 00:21:45.348 "thread": "nvmf_tgt_poll_group_000", 00:21:45.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.348 "listen_address": { 00:21:45.348 "trtype": "TCP", 00:21:45.348 "adrfam": "IPv4", 00:21:45.348 "traddr": "10.0.0.2", 00:21:45.348 
"trsvcid": "4420" 00:21:45.348 }, 00:21:45.348 "peer_address": { 00:21:45.348 "trtype": "TCP", 00:21:45.348 "adrfam": "IPv4", 00:21:45.348 "traddr": "10.0.0.1", 00:21:45.348 "trsvcid": "48122" 00:21:45.348 }, 00:21:45.348 "auth": { 00:21:45.348 "state": "completed", 00:21:45.348 "digest": "sha512", 00:21:45.348 "dhgroup": "ffdhe8192" 00:21:45.348 } 00:21:45.348 } 00:21:45.348 ]' 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.348 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.607 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:45.607 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.174 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.174 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:46.432 06:27:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.432 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.433 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:46.998 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:46.998 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:46.998 { 00:21:46.998 "cntlid": 139, 00:21:46.998 "qid": 0, 00:21:46.998 "state": "enabled", 00:21:46.998 "thread": "nvmf_tgt_poll_group_000", 00:21:46.998 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:46.998 "listen_address": { 00:21:46.999 "trtype": "TCP", 00:21:46.999 "adrfam": "IPv4", 00:21:46.999 "traddr": "10.0.0.2", 00:21:46.999 "trsvcid": "4420" 00:21:46.999 }, 00:21:46.999 "peer_address": { 00:21:46.999 "trtype": "TCP", 00:21:46.999 "adrfam": "IPv4", 00:21:46.999 "traddr": "10.0.0.1", 00:21:46.999 "trsvcid": "51992" 00:21:46.999 }, 00:21:46.999 "auth": { 00:21:46.999 "state": "completed", 00:21:46.999 "digest": "sha512", 00:21:46.999 "dhgroup": "ffdhe8192" 00:21:46.999 } 00:21:46.999 } 00:21:46.999 ]' 00:21:46.999 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.257 06:27:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.257 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.516 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:47.516 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: --dhchap-ctrl-secret DHHC-1:02:Y2RlNWQ5YjJkZDVlOGMwYjJkODlmOWQ0OGY0YTM0MjVlYWQwNmE2MTE5NTliZGUzi6o1Qg==: 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.086 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:48.703 00:21:48.703 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:48.703 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:48.703 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:48.978 06:27:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:48.978 { 00:21:48.978 "cntlid": 141, 00:21:48.978 "qid": 0, 00:21:48.978 "state": "enabled", 00:21:48.978 "thread": "nvmf_tgt_poll_group_000", 00:21:48.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:48.978 "listen_address": { 00:21:48.978 "trtype": "TCP", 00:21:48.978 "adrfam": "IPv4", 00:21:48.978 "traddr": "10.0.0.2", 00:21:48.978 "trsvcid": "4420" 00:21:48.978 }, 00:21:48.978 "peer_address": { 00:21:48.978 "trtype": "TCP", 00:21:48.978 "adrfam": "IPv4", 00:21:48.978 "traddr": "10.0.0.1", 00:21:48.978 "trsvcid": "52018" 00:21:48.978 }, 00:21:48.978 "auth": { 00:21:48.978 "state": "completed", 00:21:48.978 "digest": "sha512", 00:21:48.978 "dhgroup": "ffdhe8192" 00:21:48.978 } 00:21:48.978 } 00:21:48.978 ]' 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:48.978 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.236 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:49.236 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:01:Njk2Y2E0Zjk2YTM2NGYyMzUzNTIzNGZkYzkwNzUzMmO9uSdj: 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:49.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:49.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:21:50.061 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:21:50.061 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.061 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.061 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.062 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:50.628 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.628 { 00:21:50.628 "cntlid": 143, 00:21:50.628 "qid": 0, 00:21:50.628 "state": "enabled", 00:21:50.628 "thread": "nvmf_tgt_poll_group_000", 00:21:50.628 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.628 "listen_address": { 00:21:50.628 "trtype": "TCP", 00:21:50.628 "adrfam": 
"IPv4", 00:21:50.628 "traddr": "10.0.0.2", 00:21:50.628 "trsvcid": "4420" 00:21:50.628 }, 00:21:50.628 "peer_address": { 00:21:50.628 "trtype": "TCP", 00:21:50.628 "adrfam": "IPv4", 00:21:50.628 "traddr": "10.0.0.1", 00:21:50.628 "trsvcid": "52036" 00:21:50.628 }, 00:21:50.628 "auth": { 00:21:50.628 "state": "completed", 00:21:50.628 "digest": "sha512", 00:21:50.628 "dhgroup": "ffdhe8192" 00:21:50.628 } 00:21:50.628 } 00:21:50.628 ]' 00:21:50.628 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:50.887 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.145 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:51.145 06:27:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.712 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.712 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:51.971 06:27:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.971 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:52.229 00:21:52.229 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.229 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.229 06:27:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.488 { 00:21:52.488 "cntlid": 145, 00:21:52.488 "qid": 0, 00:21:52.488 "state": "enabled", 00:21:52.488 "thread": "nvmf_tgt_poll_group_000", 00:21:52.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.488 "listen_address": { 00:21:52.488 "trtype": "TCP", 00:21:52.488 "adrfam": "IPv4", 00:21:52.488 "traddr": "10.0.0.2", 00:21:52.488 "trsvcid": "4420" 00:21:52.488 }, 00:21:52.488 "peer_address": { 00:21:52.488 "trtype": "TCP", 00:21:52.488 "adrfam": "IPv4", 00:21:52.488 "traddr": "10.0.0.1", 00:21:52.488 "trsvcid": "52058" 00:21:52.488 }, 00:21:52.488 "auth": { 00:21:52.488 "state": 
"completed", 00:21:52.488 "digest": "sha512", 00:21:52.488 "dhgroup": "ffdhe8192" 00:21:52.488 } 00:21:52.488 } 00:21:52.488 ]' 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.488 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.746 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:52.746 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.746 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.746 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.746 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:53.005 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:53.005 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ODdkNTdiYzlhZDRiNmVjOGM0MDgwM2YzYzUwZTI3NTg3NmQwMDQzNTE3MDUyOTkwb25VPQ==: --dhchap-ctrl-secret 
DHHC-1:03:ZDVhYjllZTFhMzA3OGZiYzM4YmE3ODIzMjdjMTU0YzBjNmRkZGRmMmMyNjkyZTI4NTFmOGUwYmM5NDRjYjEwYRT+nUA=: 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.573 06:27:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:53.573 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:21:53.832 request: 00:21:53.832 { 00:21:53.832 "name": "nvme0", 00:21:53.832 "trtype": "tcp", 00:21:53.832 "traddr": "10.0.0.2", 00:21:53.832 "adrfam": "ipv4", 00:21:53.832 "trsvcid": "4420", 00:21:53.832 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:53.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:53.832 "prchk_reftag": false, 00:21:53.832 "prchk_guard": false, 00:21:53.832 "hdgst": false, 00:21:53.832 "ddgst": false, 00:21:53.832 "dhchap_key": "key2", 00:21:53.832 "allow_unrecognized_csi": false, 00:21:53.832 "method": "bdev_nvme_attach_controller", 00:21:53.832 "req_id": 1 00:21:53.832 } 00:21:53.832 Got JSON-RPC error response 00:21:53.832 response: 00:21:53.832 { 00:21:53.832 "code": -5, 00:21:53.832 "message": 
"Input/output error" 00:21:53.832 } 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:53.832 06:27:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:53.832 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:53.833 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:21:54.400 request: 00:21:54.400 { 00:21:54.400 "name": "nvme0", 00:21:54.400 "trtype": "tcp", 00:21:54.400 "traddr": "10.0.0.2", 00:21:54.400 "adrfam": "ipv4", 00:21:54.400 "trsvcid": "4420", 00:21:54.400 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.400 "prchk_reftag": false, 00:21:54.400 "prchk_guard": false, 00:21:54.400 "hdgst": 
false, 00:21:54.400 "ddgst": false, 00:21:54.400 "dhchap_key": "key1", 00:21:54.400 "dhchap_ctrlr_key": "ckey2", 00:21:54.400 "allow_unrecognized_csi": false, 00:21:54.400 "method": "bdev_nvme_attach_controller", 00:21:54.400 "req_id": 1 00:21:54.400 } 00:21:54.400 Got JSON-RPC error response 00:21:54.400 response: 00:21:54.400 { 00:21:54.400 "code": -5, 00:21:54.400 "message": "Input/output error" 00:21:54.400 } 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.400 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.968 request: 00:21:54.968 { 00:21:54.968 "name": "nvme0", 00:21:54.968 "trtype": 
"tcp", 00:21:54.968 "traddr": "10.0.0.2", 00:21:54.968 "adrfam": "ipv4", 00:21:54.968 "trsvcid": "4420", 00:21:54.968 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:54.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:54.968 "prchk_reftag": false, 00:21:54.968 "prchk_guard": false, 00:21:54.968 "hdgst": false, 00:21:54.968 "ddgst": false, 00:21:54.968 "dhchap_key": "key1", 00:21:54.968 "dhchap_ctrlr_key": "ckey1", 00:21:54.968 "allow_unrecognized_csi": false, 00:21:54.968 "method": "bdev_nvme_attach_controller", 00:21:54.968 "req_id": 1 00:21:54.968 } 00:21:54.968 Got JSON-RPC error response 00:21:54.968 response: 00:21:54.968 { 00:21:54.968 "code": -5, 00:21:54.968 "message": "Input/output error" 00:21:54.968 } 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 982318 ']' 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 982318' 00:21:54.968 killing process with pid 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982318 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:54.968 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=1003801 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 1003801 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003801 ']' 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1003801 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1003801 ']' 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.227 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.486 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.486 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:21:55.486 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:21:55.486 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.486 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 null0 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QmK 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.WAj ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.WAj 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.11W 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.R7q ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.R7q 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.OhD 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.G0h ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.G0h 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.D9p 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:55.745 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:56.681 nvme0n1 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:56.681 { 00:21:56.681 "cntlid": 1, 00:21:56.681 "qid": 0, 00:21:56.681 "state": "enabled", 00:21:56.681 "thread": "nvmf_tgt_poll_group_000", 00:21:56.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:56.681 "listen_address": { 00:21:56.681 "trtype": "TCP", 00:21:56.681 "adrfam": "IPv4", 00:21:56.681 "traddr": "10.0.0.2", 00:21:56.681 "trsvcid": "4420" 00:21:56.681 }, 00:21:56.681 "peer_address": { 00:21:56.681 "trtype": "TCP", 00:21:56.681 "adrfam": "IPv4", 00:21:56.681 "traddr": 
"10.0.0.1", 00:21:56.681 "trsvcid": "43296" 00:21:56.681 }, 00:21:56.681 "auth": { 00:21:56.681 "state": "completed", 00:21:56.681 "digest": "sha512", 00:21:56.681 "dhgroup": "ffdhe8192" 00:21:56.681 } 00:21:56.681 } 00:21:56.681 ]' 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:56.681 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:56.940 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:56.940 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:56.940 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.940 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:56.940 06:27:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:21:57.507 06:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:57.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:21:57.507 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:57.766 06:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:57.766 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.024 request: 00:21:58.024 { 00:21:58.024 "name": "nvme0", 00:21:58.024 "trtype": "tcp", 00:21:58.024 "traddr": "10.0.0.2", 00:21:58.024 "adrfam": "ipv4", 00:21:58.024 "trsvcid": "4420", 00:21:58.024 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.024 "prchk_reftag": false, 00:21:58.024 "prchk_guard": false, 00:21:58.024 "hdgst": false, 00:21:58.024 "ddgst": false, 00:21:58.024 "dhchap_key": "key3", 00:21:58.024 
"allow_unrecognized_csi": false, 00:21:58.024 "method": "bdev_nvme_attach_controller", 00:21:58.024 "req_id": 1 00:21:58.024 } 00:21:58.024 Got JSON-RPC error response 00:21:58.024 response: 00:21:58.024 { 00:21:58.024 "code": -5, 00:21:58.024 "message": "Input/output error" 00:21:58.024 } 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:58.024 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:58.283 06:27:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.283 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:58.541 request: 00:21:58.541 { 00:21:58.541 "name": "nvme0", 00:21:58.541 "trtype": "tcp", 00:21:58.541 "traddr": "10.0.0.2", 00:21:58.541 "adrfam": "ipv4", 00:21:58.541 "trsvcid": "4420", 00:21:58.541 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:58.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.541 "prchk_reftag": false, 00:21:58.541 "prchk_guard": false, 00:21:58.541 "hdgst": false, 00:21:58.541 "ddgst": false, 00:21:58.541 "dhchap_key": "key3", 00:21:58.542 "allow_unrecognized_csi": false, 00:21:58.542 "method": "bdev_nvme_attach_controller", 00:21:58.542 "req_id": 1 00:21:58.542 } 00:21:58.542 Got JSON-RPC error response 00:21:58.542 response: 00:21:58.542 { 00:21:58.542 "code": -5, 00:21:58.542 "message": "Input/output error" 00:21:58.542 } 00:21:58.542 
06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.542 06:27:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.542 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:58.800 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:59.061 request: 00:21:59.061 { 00:21:59.061 "name": "nvme0", 00:21:59.061 "trtype": "tcp", 00:21:59.061 "traddr": "10.0.0.2", 00:21:59.061 "adrfam": "ipv4", 00:21:59.061 "trsvcid": "4420", 00:21:59.061 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:21:59.061 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:59.061 "prchk_reftag": false, 00:21:59.061 "prchk_guard": false, 00:21:59.061 "hdgst": false, 00:21:59.061 "ddgst": false, 00:21:59.061 "dhchap_key": "key0", 00:21:59.061 "dhchap_ctrlr_key": "key1", 00:21:59.061 "allow_unrecognized_csi": false, 00:21:59.061 "method": "bdev_nvme_attach_controller", 00:21:59.061 "req_id": 1 00:21:59.061 } 00:21:59.061 Got JSON-RPC error response 00:21:59.061 response: 00:21:59.061 { 00:21:59.061 "code": -5, 00:21:59.061 "message": "Input/output error" 00:21:59.061 } 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:59.061 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:21:59.320 nvme0n1 00:21:59.320 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:21:59.320 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:21:59.320 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.578 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.578 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.578 06:27:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:21:59.578 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:00.514 nvme0n1 00:22:00.514 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:22:00.514 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:22:00.514 06:27:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.514 
06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:22:00.514 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.772 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.772 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:22:00.772 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: --dhchap-ctrl-secret DHHC-1:03:NjM3NjBkMzY5MDllM2UzODgxZmI0MWMwOTkzMjQ1NDU4ZjY0MzY4NWU0YTkwMjhjYzA5OWI4NWVhZWYxMmUxM0A2UuU=: 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:01.339 06:27:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:01.597 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:22:02.165 request: 00:22:02.165 { 00:22:02.165 "name": "nvme0", 00:22:02.165 "trtype": "tcp", 00:22:02.165 "traddr": "10.0.0.2", 00:22:02.165 "adrfam": "ipv4", 00:22:02.165 "trsvcid": "4420", 00:22:02.165 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:02.165 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:02.165 "prchk_reftag": false, 00:22:02.165 "prchk_guard": false, 00:22:02.165 "hdgst": false, 00:22:02.165 "ddgst": false, 00:22:02.165 "dhchap_key": "key1", 00:22:02.165 "allow_unrecognized_csi": false, 00:22:02.165 "method": "bdev_nvme_attach_controller", 00:22:02.165 "req_id": 1 00:22:02.165 } 00:22:02.165 Got JSON-RPC error response 00:22:02.165 response: 00:22:02.165 { 00:22:02.165 "code": -5, 00:22:02.165 "message": "Input/output error" 00:22:02.165 } 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.165 06:27:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:02.732 nvme0n1 00:22:02.732 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:22:02.732 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:22:02.732 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.990 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.990 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.990 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:03.249 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:22:03.513 nvme0n1 00:22:03.513 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:22:03.513 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:22:03.513 06:27:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:03.513 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:03.513 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:03.513 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: '' 2s 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: ]] 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OGUxYjMzOTk1NmVkNmZhZTk5ZWI3NDYxNzc1YjRkYjKBKj71: 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:03.773 06:27:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:06.304 
06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: 2s 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:22:06.304 06:27:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: ]] 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MWU5Nzc3NDJhZWYwMWIwNDc1YmEzODllNGEyNWMwNjBjODVkNzVmMDQxNDU4M2I3I4tHmw==: 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:22:06.304 06:27:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.207 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.207 06:27:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:08.774 nvme0n1 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:08.774 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:22:09.341 06:28:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:22:09.599 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:22:09.599 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:22:09.599 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:09.858 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:22:10.117 request: 00:22:10.117 { 00:22:10.117 "name": "nvme0", 00:22:10.117 "dhchap_key": "key1", 00:22:10.117 "dhchap_ctrlr_key": "key3", 00:22:10.117 "method": "bdev_nvme_set_keys", 00:22:10.117 "req_id": 1 00:22:10.117 } 00:22:10.117 Got JSON-RPC error response 00:22:10.117 response: 00:22:10.117 { 00:22:10.117 "code": -13, 00:22:10.117 "message": "Permission denied" 00:22:10.117 } 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:10.375 06:28:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:10.375 06:28:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:11.751 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:11.751 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:11.751 06:28:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:11.751 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:11.752 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:12.318 nvme0n1 00:22:12.318 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:12.318 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.318 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.576 06:28:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:12.576 06:28:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:12.835 request: 00:22:12.835 { 00:22:12.835 "name": "nvme0", 00:22:12.835 "dhchap_key": "key2", 00:22:12.835 "dhchap_ctrlr_key": "key0", 00:22:12.835 "method": "bdev_nvme_set_keys", 00:22:12.835 "req_id": 1 00:22:12.835 } 00:22:12.835 Got JSON-RPC error response 00:22:12.835 response: 00:22:12.835 { 00:22:12.835 "code": -13, 00:22:12.835 "message": "Permission denied" 00:22:12.835 } 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:12.835 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.094 06:28:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:13.094 06:28:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:14.029 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:14.029 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:14.029 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 982347 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 982347 ']' 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 982347 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 982347 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 982347' 00:22:14.288 killing process with pid 982347 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 982347 00:22:14.288 06:28:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 982347 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:14.547 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:14.547 rmmod nvme_tcp 00:22:14.806 rmmod nvme_fabrics 00:22:14.806 rmmod nvme_keyring 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 1003801 ']' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 1003801 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1003801 ']' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1003801 00:22:14.806 
06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1003801 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1003801' 00:22:14.806 killing process with pid 1003801 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1003801 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1003801 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:14.806 06:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:14.806 06:28:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.QmK /tmp/spdk.key-sha256.11W /tmp/spdk.key-sha384.OhD /tmp/spdk.key-sha512.D9p /tmp/spdk.key-sha512.WAj /tmp/spdk.key-sha384.R7q /tmp/spdk.key-sha256.G0h '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:17.343 00:22:17.343 real 2m31.694s 00:22:17.343 user 5m49.867s 00:22:17.343 sys 0m24.218s 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.343 ************************************ 00:22:17.343 END TEST nvmf_auth_target 00:22:17.343 ************************************ 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- 
# xtrace_disable 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:17.343 ************************************ 00:22:17.343 START TEST nvmf_bdevio_no_huge 00:22:17.343 ************************************ 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:17.343 * Looking for test storage... 00:22:17.343 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- 
# local 'op=<' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:17.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.343 --rc genhtml_branch_coverage=1 00:22:17.343 --rc genhtml_function_coverage=1 00:22:17.343 --rc genhtml_legend=1 00:22:17.343 --rc geninfo_all_blocks=1 00:22:17.343 --rc geninfo_unexecuted_blocks=1 00:22:17.343 00:22:17.343 ' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:17.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.343 --rc genhtml_branch_coverage=1 00:22:17.343 --rc genhtml_function_coverage=1 00:22:17.343 --rc genhtml_legend=1 00:22:17.343 --rc geninfo_all_blocks=1 00:22:17.343 --rc geninfo_unexecuted_blocks=1 00:22:17.343 00:22:17.343 ' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:17.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.343 --rc genhtml_branch_coverage=1 00:22:17.343 --rc genhtml_function_coverage=1 00:22:17.343 --rc genhtml_legend=1 00:22:17.343 --rc geninfo_all_blocks=1 00:22:17.343 --rc geninfo_unexecuted_blocks=1 00:22:17.343 00:22:17.343 ' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:17.343 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:17.343 --rc genhtml_branch_coverage=1 
00:22:17.343 --rc genhtml_function_coverage=1 00:22:17.343 --rc genhtml_legend=1 00:22:17.343 --rc geninfo_all_blocks=1 00:22:17.343 --rc geninfo_unexecuted_blocks=1 00:22:17.343 00:22:17.343 ' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:17.343 06:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:17.343 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:17.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:17.344 06:28:08 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:23.990 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:23.991 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:23.991 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:23.991 Found net devices under 0000:af:00.0: cvl_0_0 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:23.991 
06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:23.991 Found net devices under 0000:af:00.1: cvl_0_1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:23.991 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:23.991 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:22:23.991 00:22:23.991 --- 10.0.0.2 ping statistics --- 00:22:23.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.991 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:23.991 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:23.991 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:22:23.991 00:22:23.991 --- 10.0.0.1 ping statistics --- 00:22:23.991 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:23.991 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.991 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=1010517 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 1010517 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1010517 ']' 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:23.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 [2024-12-13 06:28:14.722315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:23.992 [2024-12-13 06:28:14.722361] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:23.992 [2024-12-13 06:28:14.803924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:23.992 [2024-12-13 06:28:14.839380] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:23.992 [2024-12-13 06:28:14.839412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:23.992 [2024-12-13 06:28:14.839419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:23.992 [2024-12-13 06:28:14.839424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:23.992 [2024-12-13 06:28:14.839429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:23.992 [2024-12-13 06:28:14.840523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:23.992 [2024-12-13 06:28:14.840629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:23.992 [2024-12-13 06:28:14.840663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:23.992 [2024-12-13 06:28:14.840664] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 [2024-12-13 06:28:14.988932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:23.992 06:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 06:28:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 Malloc0 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:23.992 [2024-12-13 06:28:15.033230] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.992 06:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:23.992 { 00:22:23.992 "params": { 00:22:23.992 "name": "Nvme$subsystem", 00:22:23.992 "trtype": "$TEST_TRANSPORT", 00:22:23.992 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:23.992 "adrfam": "ipv4", 00:22:23.992 "trsvcid": "$NVMF_PORT", 00:22:23.992 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:23.992 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:23.992 "hdgst": ${hdgst:-false}, 00:22:23.992 "ddgst": ${ddgst:-false} 00:22:23.992 }, 00:22:23.992 "method": "bdev_nvme_attach_controller" 00:22:23.992 } 00:22:23.992 EOF 00:22:23.992 )") 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:23.992 06:28:15 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:23.992 "params": { 00:22:23.992 "name": "Nvme1", 00:22:23.992 "trtype": "tcp", 00:22:23.992 "traddr": "10.0.0.2", 00:22:23.992 "adrfam": "ipv4", 00:22:23.992 "trsvcid": "4420", 00:22:23.992 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.992 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.992 "hdgst": false, 00:22:23.992 "ddgst": false 00:22:23.992 }, 00:22:23.992 "method": "bdev_nvme_attach_controller" 00:22:23.992 }' 00:22:23.992 [2024-12-13 06:28:15.083488] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:23.992 [2024-12-13 06:28:15.083557] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1010601 ] 00:22:23.992 [2024-12-13 06:28:15.166116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:23.992 [2024-12-13 06:28:15.203319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:23.992 [2024-12-13 06:28:15.203428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.992 [2024-12-13 06:28:15.203427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:23.992 I/O targets: 00:22:23.992 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:23.992 00:22:23.992 00:22:23.992 CUnit - A unit testing framework for C - Version 2.1-3 00:22:23.992 http://cunit.sourceforge.net/ 00:22:23.992 00:22:23.992 00:22:23.992 Suite: bdevio tests on: Nvme1n1 00:22:23.992 Test: blockdev write read block ...passed 00:22:23.992 Test: blockdev write zeroes read block ...passed 00:22:23.992 Test: blockdev write zeroes read no split ...passed 00:22:23.992 Test: blockdev write zeroes 
read split ...passed 00:22:23.992 Test: blockdev write zeroes read split partial ...passed 00:22:23.992 Test: blockdev reset ...[2024-12-13 06:28:15.565719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:23.992 [2024-12-13 06:28:15.565780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x159bd00 (9): Bad file descriptor 00:22:23.992 [2024-12-13 06:28:15.621320] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:23.992 passed 00:22:24.251 Test: blockdev write read 8 blocks ...passed 00:22:24.251 Test: blockdev write read size > 128k ...passed 00:22:24.251 Test: blockdev write read invalid size ...passed 00:22:24.251 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:24.251 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:24.251 Test: blockdev write read max offset ...passed 00:22:24.251 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:24.251 Test: blockdev writev readv 8 blocks ...passed 00:22:24.251 Test: blockdev writev readv 30 x 1block ...passed 00:22:24.251 Test: blockdev writev readv block ...passed 00:22:24.251 Test: blockdev writev readv size > 128k ...passed 00:22:24.251 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:24.251 Test: blockdev comparev and writev ...[2024-12-13 06:28:15.872485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.872518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.872533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 
06:28:15.872541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.872773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.872783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.872794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.872800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.873022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.873032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.873043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.873050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.873279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.873288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:24.251 [2024-12-13 06:28:15.873299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:24.251 [2024-12-13 06:28:15.873306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:24.510 passed 00:22:24.510 Test: blockdev nvme passthru rw ...passed 00:22:24.510 Test: blockdev nvme passthru vendor specific ...[2024-12-13 06:28:15.955815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.510 [2024-12-13 06:28:15.955830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:24.510 [2024-12-13 06:28:15.955936] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.510 [2024-12-13 06:28:15.955946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:24.510 [2024-12-13 06:28:15.956046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.510 [2024-12-13 06:28:15.956055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:24.510 [2024-12-13 06:28:15.956152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:24.510 [2024-12-13 06:28:15.956161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:24.510 passed 00:22:24.510 Test: blockdev nvme admin passthru ...passed 00:22:24.510 Test: blockdev copy ...passed 00:22:24.510 00:22:24.510 Run Summary: Type Total Ran Passed Failed Inactive 00:22:24.510 suites 1 1 n/a 0 0 00:22:24.510 tests 23 23 23 0 0 00:22:24.510 asserts 152 152 152 0 n/a 00:22:24.510 00:22:24.510 Elapsed time = 1.140 seconds 
00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:24.767 rmmod nvme_tcp 00:22:24.767 rmmod nvme_fabrics 00:22:24.767 rmmod nvme_keyring 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:24.767 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 1010517 ']' 00:22:24.768 06:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 1010517 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1010517 ']' 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1010517 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1010517 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1010517' 00:22:24.768 killing process with pid 1010517 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1010517 00:22:24.768 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1010517 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:25.026 06:28:16 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:25.026 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:25.284 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:25.284 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:25.284 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:25.284 06:28:16 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:27.189 00:22:27.189 real 0m10.142s 00:22:27.189 user 0m10.950s 00:22:27.189 sys 0m5.310s 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.189 ************************************ 00:22:27.189 END TEST nvmf_bdevio_no_huge 00:22:27.189 ************************************ 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:27.189 
************************************ 00:22:27.189 START TEST nvmf_tls 00:22:27.189 ************************************ 00:22:27.189 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:27.449 * Looking for test storage... 00:22:27.449 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:27.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.449 --rc genhtml_branch_coverage=1 00:22:27.449 --rc genhtml_function_coverage=1 00:22:27.449 --rc genhtml_legend=1 00:22:27.449 --rc geninfo_all_blocks=1 00:22:27.449 --rc geninfo_unexecuted_blocks=1 00:22:27.449 00:22:27.449 ' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:27.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.449 --rc genhtml_branch_coverage=1 00:22:27.449 --rc genhtml_function_coverage=1 00:22:27.449 --rc genhtml_legend=1 00:22:27.449 --rc geninfo_all_blocks=1 00:22:27.449 --rc geninfo_unexecuted_blocks=1 00:22:27.449 00:22:27.449 ' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:27.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.449 --rc genhtml_branch_coverage=1 00:22:27.449 --rc genhtml_function_coverage=1 00:22:27.449 --rc genhtml_legend=1 00:22:27.449 --rc geninfo_all_blocks=1 00:22:27.449 --rc geninfo_unexecuted_blocks=1 00:22:27.449 00:22:27.449 ' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:27.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:27.449 --rc genhtml_branch_coverage=1 00:22:27.449 --rc genhtml_function_coverage=1 00:22:27.449 --rc genhtml_legend=1 00:22:27.449 --rc geninfo_all_blocks=1 00:22:27.449 --rc geninfo_unexecuted_blocks=1 00:22:27.449 00:22:27.449 ' 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:27.449 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:27.449 
06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:27.449 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:27.450 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:27.450 06:28:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.025 06:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:34.025 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:34.025 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.025 06:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:34.025 Found net devices under 0000:af:00.0: cvl_0_0 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.025 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:34.026 Found net devices under 0000:af:00.1: cvl_0_1 00:22:34.026 06:28:24 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:34.026 
06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:34.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:22:34.026 00:22:34.026 --- 10.0.0.2 ping statistics --- 00:22:34.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.026 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
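The namespace plumbing driven by `nvmf_tcp_init` above reduces to a short sequence of ip(8) and iptables(8) commands. A minimal sketch follows; `DRY_RUN` defaults to on so it can be read and exercised without root or the back-to-back test NICs (the `cvl_0_0`/`cvl_0_1` names and addresses are taken from the log, the wrapper function is mine):

```shell
#!/bin/sh
# Sketch of the nvmf_tcp_init plumbing recorded in the log: move the
# target-side port into its own network namespace and address both ends.
# DRY_RUN defaults to on so the sketch runs without root; unset it to
# execute the commands for real.
DRY_RUN=${DRY_RUN:-1}
run() { if [ -n "$DRY_RUN" ]; then echo "$@"; else "$@"; fi; }

tgt_if=cvl_0_0        # target port, ends up inside the namespace
ini_if=cvl_0_1        # initiator port, stays in the default namespace
ns=cvl_0_0_ns_spdk

run ip -4 addr flush "$tgt_if"
run ip -4 addr flush "$ini_if"
run ip netns add "$ns"
run ip link set "$tgt_if" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$ini_if"
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
run ip link set "$ini_if" up
run ip netns exec "$ns" ip link set "$tgt_if" up
run ip netns exec "$ns" ip link set lo up
# Open the NVMe/TCP port on the initiator side, as the ipts helper does.
run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
# Sanity pings in both directions, mirroring the log.
run ping -c 1 10.0.0.2
run ip netns exec "$ns" ping -c 1 10.0.0.1
```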
00:22:34.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:22:34.026 00:22:34.026 --- 10.0.0.1 ping statistics --- 00:22:34.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.026 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1014287 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1014287 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1014287 ']' 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:34.026 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.026 [2024-12-13 06:28:24.983315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:34.026 [2024-12-13 06:28:24.983360] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.026 [2024-12-13 06:28:25.063085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.026 [2024-12-13 06:28:25.084463] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.026 [2024-12-13 06:28:25.084499] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:34.026 [2024-12-13 06:28:25.084506] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.026 [2024-12-13 06:28:25.084512] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.026 [2024-12-13 06:28:25.084517] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.026 [2024-12-13 06:28:25.084998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:34.026 true 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:34.026 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:34.026 
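The tls.sh@74-75 lines above show the test's set-then-verify pattern: read an option back through `sock_impl_get_options`, extract it with `jq -r`, and fail unless it matches. A standalone sketch of that pattern; `get_tls_version` is a stand-in for the `rpc.py ... | jq -r .tls_version` pipeline, fed a canned JSON response so the sketch runs without a live SPDK target:

```shell
#!/bin/sh
# Sketch of the set-then-verify pattern in target/tls.sh. get_tls_version
# stands in for "rpc.py sock_impl_get_options -i ssl | jq -r .tls_version";
# the canned JSON below imitates an RPC response and is parsed with sed for
# portability.
get_tls_version() {
    echo '{"impl_name": "ssl", "tls_version": 13, "enable_ktls": false}' |
        sed -n 's/.*"tls_version": *\([0-9]*\).*/\1/p'
}

version=$(get_tls_version)
# tls.sh spells the comparison as [[ $version != \1\3 ]]; the backslashes
# force a literal (non-glob) right-hand side. The POSIX equivalent:
if [ "$version" != 13 ]; then
    echo "expected tls_version 13, got '$version'" >&2
    exit 1
fi
```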
06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:34.285 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.285 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:34.285 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:34.285 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:34.285 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:34.544 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.544 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:34.803 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:34.803 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:34.803 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.803 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:35.061 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:35.062 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:35.062 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:35.062 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.062 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:35.320 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:35.320 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:35.320 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:35.579 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.579 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:35.838 06:28:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.SS10HDVRyj 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.wU90XNfpTa 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.SS10HDVRyj 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.wU90XNfpTa 00:22:35.838 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:36.098 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:36.356 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.SS10HDVRyj 00:22:36.356 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.SS10HDVRyj 00:22:36.356 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:36.356 [2024-12-13 06:28:27.939277] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.356 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:36.615 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:36.873 [2024-12-13 06:28:28.312224] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:36.873 [2024-12-13 06:28:28.312439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:36.874 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:36.874 malloc0 00:22:36.874 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:37.132 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.SS10HDVRyj 00:22:37.439 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:37.789 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.SS10HDVRyj 00:22:47.832 Initializing NVMe Controllers 00:22:47.832 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.832 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.832 Initialization complete. Launching workers. 
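The file handed to `--psk-path` above was produced a few steps earlier by `format_interchange_psk`. A sketch of that helper: the `NVMeTLSkey-1` prefix and the `01` hash-indicator field come straight from the log's output, while the CRC-32 being computed over the key bytes and appended little-endian before base64 encoding is my reading of the helper, so treat that detail as an assumption. Like the original, it shells out to python:

```shell
#!/bin/sh
# Sketch of format_interchange_psk from nvmf/common.sh: wrap raw key
# material in the NVMe TLS interchange form
#   NVMeTLSkey-1:<hash>:<base64(key + crc32)>:
# Assumption: the CRC-32 of the key bytes is appended little-endian
# before base64 encoding.
format_interchange_psk() {
    key=$1 digest=$2
    body=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")
print(base64.b64encode(key + crc).decode())
' "$key")
    printf 'NVMeTLSkey-1:%02d:%s:\n' "$digest" "$body"
}

format_interchange_psk 00112233445566778899aabbccddeeff 1
```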
00:22:47.832 ======================================================== 00:22:47.832 Latency(us) 00:22:47.832 Device Information : IOPS MiB/s Average min max 00:22:47.832 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16928.98 66.13 3780.62 788.92 6545.04 00:22:47.832 ======================================================== 00:22:47.832 Total : 16928.98 66.13 3780.62 788.92 6545.04 00:22:47.832 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.SS10HDVRyj 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SS10HDVRyj 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1016743 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1016743 /var/tmp/bdevperf.sock 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1016743 ']' 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
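`waitforlisten` above blocks until the freshly launched bdevperf answers on its UNIX-domain RPC socket, honoring the `max_retries=100` seen in the log. A reduced sketch that only polls for the socket path; the real helper additionally probes the socket with an RPC call before declaring success:

```shell
#!/bin/sh
# Reduced sketch of waitforlisten from common/autotest_common.sh: poll
# until a UNIX-domain socket appears at the given path, giving up after
# max_retries attempts. The real helper also issues an RPC probe.
wait_for_rpc_sock() {
    sock_path=$1
    max_retries=${2:-100}
    i=0
    while [ ! -S "$sock_path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}

# Example: a path that never appears times out after a few retries.
if wait_for_rpc_sock "/tmp/no-such-$$.sock" 3; then
    echo "unexpectedly found socket" >&2
fi
```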
00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:47.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:47.832 [2024-12-13 06:28:39.234484] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:22:47.832 [2024-12-13 06:28:39.234530] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1016743 ] 00:22:47.832 [2024-12-13 06:28:39.308414] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.832 [2024-12-13 06:28:39.329986] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:47.832 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SS10HDVRyj 00:22:48.091 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:22:48.349 [2024-12-13 06:28:39.776901] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:48.349 TLSTESTn1 00:22:48.349 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:48.349 Running I/O for 10 seconds... 00:22:50.658 5276.00 IOPS, 20.61 MiB/s [2024-12-13T05:28:43.247Z] 5439.00 IOPS, 21.25 MiB/s [2024-12-13T05:28:44.181Z] 5469.67 IOPS, 21.37 MiB/s [2024-12-13T05:28:45.115Z] 5518.00 IOPS, 21.55 MiB/s [2024-12-13T05:28:46.047Z] 5537.40 IOPS, 21.63 MiB/s [2024-12-13T05:28:46.981Z] 5556.33 IOPS, 21.70 MiB/s [2024-12-13T05:28:48.355Z] 5563.29 IOPS, 21.73 MiB/s [2024-12-13T05:28:49.290Z] 5562.75 IOPS, 21.73 MiB/s [2024-12-13T05:28:50.225Z] 5550.11 IOPS, 21.68 MiB/s [2024-12-13T05:28:50.225Z] 5555.70 IOPS, 21.70 MiB/s 00:22:58.571 Latency(us) 00:22:58.571 [2024-12-13T05:28:50.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.571 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:58.571 Verification LBA range: start 0x0 length 0x2000 00:22:58.571 TLSTESTn1 : 10.01 5560.61 21.72 0.00 0.00 22985.92 4868.39 30708.30 00:22:58.571 [2024-12-13T05:28:50.225Z] =================================================================================================================== 00:22:58.571 [2024-12-13T05:28:50.225Z] Total : 5560.61 21.72 0.00 0.00 22985.92 4868.39 30708.30 00:22:58.571 { 00:22:58.571 "results": [ 00:22:58.571 { 00:22:58.571 "job": "TLSTESTn1", 00:22:58.571 "core_mask": "0x4", 00:22:58.571 "workload": "verify", 00:22:58.571 "status": "finished", 00:22:58.571 "verify_range": { 00:22:58.571 "start": 0, 00:22:58.571 "length": 8192 00:22:58.571 }, 00:22:58.571 "queue_depth": 128, 00:22:58.571 "io_size": 4096, 00:22:58.571 "runtime": 10.014007, 00:22:58.571 "iops": 
5560.611251819576, 00:22:58.571 "mibps": 21.72113770242022, 00:22:58.571 "io_failed": 0, 00:22:58.571 "io_timeout": 0, 00:22:58.571 "avg_latency_us": 22985.917054398804, 00:22:58.571 "min_latency_us": 4868.388571428572, 00:22:58.571 "max_latency_us": 30708.297142857144 00:22:58.571 } 00:22:58.571 ], 00:22:58.571 "core_count": 1 00:22:58.571 } 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1016743 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1016743 ']' 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1016743 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1016743 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1016743' 00:22:58.571 killing process with pid 1016743 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1016743 00:22:58.571 Received shutdown signal, test time was about 10.000000 seconds 00:22:58.571 00:22:58.571 Latency(us) 00:22:58.571 [2024-12-13T05:28:50.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:58.571 [2024-12-13T05:28:50.225Z] 
=================================================================================================================== 00:22:58.571 [2024-12-13T05:28:50.225Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1016743 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wU90XNfpTa 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wU90XNfpTa 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.wU90XNfpTa 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.wU90XNfpTa 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018389 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018389 /var/tmp/bdevperf.sock 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018389 ']' 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.571 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:58.830 [2024-12-13 06:28:50.267540] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:58.830 [2024-12-13 06:28:50.267588] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018389 ] 00:22:58.830 [2024-12-13 06:28:50.344466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.830 [2024-12-13 06:28:50.365157] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:58.830 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:58.830 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:58.830 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.wU90XNfpTa 00:22:59.088 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:59.346 [2024-12-13 06:28:50.832240] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:59.346 [2024-12-13 06:28:50.838523] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:22:59.346 [2024-12-13 06:28:50.839418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa980c0 (107): Transport endpoint is not connected 00:22:59.346 [2024-12-13 06:28:50.840412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa980c0 (9): Bad file descriptor 00:22:59.346 [2024-12-13 
06:28:50.841413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:22:59.346 [2024-12-13 06:28:50.841422] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:22:59.346 [2024-12-13 06:28:50.841430] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:22:59.346 [2024-12-13 06:28:50.841438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:22:59.346 request: 00:22:59.346 { 00:22:59.346 "name": "TLSTEST", 00:22:59.346 "trtype": "tcp", 00:22:59.346 "traddr": "10.0.0.2", 00:22:59.346 "adrfam": "ipv4", 00:22:59.346 "trsvcid": "4420", 00:22:59.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:59.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:59.346 "prchk_reftag": false, 00:22:59.346 "prchk_guard": false, 00:22:59.346 "hdgst": false, 00:22:59.346 "ddgst": false, 00:22:59.346 "psk": "key0", 00:22:59.346 "allow_unrecognized_csi": false, 00:22:59.346 "method": "bdev_nvme_attach_controller", 00:22:59.346 "req_id": 1 00:22:59.346 } 00:22:59.346 Got JSON-RPC error response 00:22:59.346 response: 00:22:59.346 { 00:22:59.346 "code": -5, 00:22:59.346 "message": "Input/output error" 00:22:59.346 } 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018389 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018389 ']' 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018389 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018389 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018389' 00:22:59.346 killing process with pid 1018389 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018389 00:22:59.346 Received shutdown signal, test time was about 10.000000 seconds 00:22:59.346 00:22:59.346 Latency(us) 00:22:59.346 [2024-12-13T05:28:51.000Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:59.346 [2024-12-13T05:28:51.000Z] =================================================================================================================== 00:22:59.346 [2024-12-13T05:28:51.000Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:59.346 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018389 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SS10HDVRyj 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SS10HDVRyj 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.SS10HDVRyj 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SS10HDVRyj 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018551 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018551 
/var/tmp/bdevperf.sock 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018551 ']' 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:59.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:59.605 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:59.605 [2024-12-13 06:28:51.118282] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:22:59.605 [2024-12-13 06:28:51.118331] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018551 ] 00:22:59.605 [2024-12-13 06:28:51.193090] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:59.605 [2024-12-13 06:28:51.212527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.864 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.864 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:59.864 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SS10HDVRyj 00:22:59.864 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:00.122 [2024-12-13 06:28:51.687029] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.122 [2024-12-13 06:28:51.691411] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:00.122 [2024-12-13 06:28:51.691431] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:00.122 [2024-12-13 06:28:51.691461] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:00.122 [2024-12-13 06:28:51.692191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd810c0 (107): Transport endpoint is not connected 00:23:00.122 [2024-12-13 06:28:51.693183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd810c0 (9): Bad file descriptor 00:23:00.122 [2024-12-13 06:28:51.694184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:00.122 [2024-12-13 06:28:51.694194] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.122 [2024-12-13 06:28:51.694201] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:00.122 [2024-12-13 06:28:51.694209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:00.122 request: 00:23:00.122 { 00:23:00.122 "name": "TLSTEST", 00:23:00.122 "trtype": "tcp", 00:23:00.122 "traddr": "10.0.0.2", 00:23:00.122 "adrfam": "ipv4", 00:23:00.122 "trsvcid": "4420", 00:23:00.122 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.122 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.122 "prchk_reftag": false, 00:23:00.122 "prchk_guard": false, 00:23:00.122 "hdgst": false, 00:23:00.122 "ddgst": false, 00:23:00.122 "psk": "key0", 00:23:00.122 "allow_unrecognized_csi": false, 00:23:00.122 "method": "bdev_nvme_attach_controller", 00:23:00.122 "req_id": 1 00:23:00.122 } 00:23:00.122 Got JSON-RPC error response 00:23:00.122 response: 00:23:00.122 { 00:23:00.122 "code": -5, 00:23:00.122 "message": "Input/output error" 00:23:00.122 } 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018551 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018551 ']' 00:23:00.122 06:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018551 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018551 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018551' 00:23:00.122 killing process with pid 1018551 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018551 00:23:00.122 Received shutdown signal, test time was about 10.000000 seconds 00:23:00.122 00:23:00.122 Latency(us) 00:23:00.122 [2024-12-13T05:28:51.776Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.122 [2024-12-13T05:28:51.776Z] =================================================================================================================== 00:23:00.122 [2024-12-13T05:28:51.776Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:00.122 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018551 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:00.381 06:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SS10HDVRyj 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SS10HDVRyj 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.SS10HDVRyj 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.SS10HDVRyj 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock 
-q 128 -o 4096 -w verify -t 10 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018772 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018772 /var/tmp/bdevperf.sock 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018772 ']' 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:00.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:00.381 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:00.381 [2024-12-13 06:28:51.950122] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:00.381 [2024-12-13 06:28:51.950170] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018772 ] 00:23:00.381 [2024-12-13 06:28:52.023214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.639 [2024-12-13 06:28:52.043607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:00.639 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:00.639 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:00.639 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.SS10HDVRyj 00:23:00.897 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:00.897 [2024-12-13 06:28:52.498169] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:00.897 [2024-12-13 06:28:52.502721] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.897 [2024-12-13 06:28:52.502741] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:00.897 [2024-12-13 06:28:52.502764] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:00.898 [2024-12-13 06:28:52.503437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21630c0 (107): Transport endpoint is not connected 00:23:00.898 [2024-12-13 06:28:52.504430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21630c0 (9): Bad file descriptor 00:23:00.898 [2024-12-13 06:28:52.505431] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:00.898 [2024-12-13 06:28:52.505439] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:00.898 [2024-12-13 06:28:52.505450] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:00.898 [2024-12-13 06:28:52.505458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:00.898 request: 00:23:00.898 { 00:23:00.898 "name": "TLSTEST", 00:23:00.898 "trtype": "tcp", 00:23:00.898 "traddr": "10.0.0.2", 00:23:00.898 "adrfam": "ipv4", 00:23:00.898 "trsvcid": "4420", 00:23:00.898 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.898 "prchk_reftag": false, 00:23:00.898 "prchk_guard": false, 00:23:00.898 "hdgst": false, 00:23:00.898 "ddgst": false, 00:23:00.898 "psk": "key0", 00:23:00.898 "allow_unrecognized_csi": false, 00:23:00.898 "method": "bdev_nvme_attach_controller", 00:23:00.898 "req_id": 1 00:23:00.898 } 00:23:00.898 Got JSON-RPC error response 00:23:00.898 response: 00:23:00.898 { 00:23:00.898 "code": -5, 00:23:00.898 "message": "Input/output error" 00:23:00.898 } 00:23:00.898 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018772 00:23:00.898 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018772 ']' 00:23:00.898 06:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018772 00:23:00.898 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:00.898 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:00.898 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018772 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018772' 00:23:01.156 killing process with pid 1018772 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018772 00:23:01.156 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.156 00:23:01.156 Latency(us) 00:23:01.156 [2024-12-13T05:28:52.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.156 [2024-12-13T05:28:52.810Z] =================================================================================================================== 00:23:01.156 [2024-12-13T05:28:52.810Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018772 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.156 06:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:01.156 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1018790 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.157 06:28:52 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1018790 /var/tmp/bdevperf.sock 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1018790 ']' 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.157 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.157 [2024-12-13 06:28:52.756128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:01.157 [2024-12-13 06:28:52.756175] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1018790 ] 00:23:01.415 [2024-12-13 06:28:52.824998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.415 [2024-12-13 06:28:52.844390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.415 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:01.415 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:01.415 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:01.674 [2024-12-13 06:28:53.114825] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:01.674 [2024-12-13 06:28:53.114856] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:01.674 request: 00:23:01.674 { 00:23:01.674 "name": "key0", 00:23:01.674 "path": "", 00:23:01.674 "method": "keyring_file_add_key", 00:23:01.674 "req_id": 1 00:23:01.674 } 00:23:01.674 Got JSON-RPC error response 00:23:01.674 response: 00:23:01.674 { 00:23:01.674 "code": -1, 00:23:01.674 "message": "Operation not permitted" 00:23:01.674 } 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:01.674 [2024-12-13 06:28:53.287355] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:23:01.674 [2024-12-13 06:28:53.287390] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:01.674 request: 00:23:01.674 { 00:23:01.674 "name": "TLSTEST", 00:23:01.674 "trtype": "tcp", 00:23:01.674 "traddr": "10.0.0.2", 00:23:01.674 "adrfam": "ipv4", 00:23:01.674 "trsvcid": "4420", 00:23:01.674 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.674 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.674 "prchk_reftag": false, 00:23:01.674 "prchk_guard": false, 00:23:01.674 "hdgst": false, 00:23:01.674 "ddgst": false, 00:23:01.674 "psk": "key0", 00:23:01.674 "allow_unrecognized_csi": false, 00:23:01.674 "method": "bdev_nvme_attach_controller", 00:23:01.674 "req_id": 1 00:23:01.674 } 00:23:01.674 Got JSON-RPC error response 00:23:01.674 response: 00:23:01.674 { 00:23:01.674 "code": -126, 00:23:01.674 "message": "Required key not available" 00:23:01.674 } 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1018790 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1018790 ']' 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1018790 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.674 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1018790 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1018790' 00:23:01.933 killing process with pid 1018790 
00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1018790 00:23:01.933 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.933 00:23:01.933 Latency(us) 00:23:01.933 [2024-12-13T05:28:53.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.933 [2024-12-13T05:28:53.587Z] =================================================================================================================== 00:23:01.933 [2024-12-13T05:28:53.587Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1018790 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1014287 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1014287 ']' 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1014287 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1014287 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1014287' 00:23:01.933 killing process with pid 1014287 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1014287 00:23:01.933 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1014287 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.Va3na0apAG 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:02.192 06:28:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.Va3na0apAG 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1019029 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1019029 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019029 ']' 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:02.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:02.192 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.192 [2024-12-13 06:28:53.807432] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:02.192 [2024-12-13 06:28:53.807488] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:02.451 [2024-12-13 06:28:53.885704] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.451 [2024-12-13 06:28:53.906691] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:02.451 [2024-12-13 06:28:53.906726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:02.451 [2024-12-13 06:28:53.906733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:02.451 [2024-12-13 06:28:53.906739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:02.451 [2024-12-13 06:28:53.906744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:02.451 [2024-12-13 06:28:53.907226] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:02.451 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.451 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:02.451 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:02.451 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:02.451 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.451 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:02.451 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:02.451 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Va3na0apAG 00:23:02.451 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:02.709 [2024-12-13 06:28:54.202309] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:02.709 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:02.968 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:03.226 [2024-12-13 06:28:54.623397] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:03.226 [2024-12-13 06:28:54.623594] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:03.226 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:03.226 malloc0 00:23:03.226 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:03.484 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:03.742 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Va3na0apAG 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Va3na0apAG 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1019283 00:23:04.000 06:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1019283 /var/tmp/bdevperf.sock 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1019283 ']' 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.000 [2024-12-13 06:28:55.437591] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:04.000 [2024-12-13 06:28:55.437649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1019283 ] 00:23:04.000 [2024-12-13 06:28:55.504875] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.000 [2024-12-13 06:28:55.526610] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.000 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:04.258 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:04.516 [2024-12-13 06:28:55.997206] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:04.516 TLSTESTn1 00:23:04.516 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:04.516 Running I/O for 10 seconds... 
00:23:06.826 5096.00 IOPS, 19.91 MiB/s [2024-12-13T05:28:59.415Z] 5315.50 IOPS, 20.76 MiB/s [2024-12-13T05:29:00.351Z] 5390.00 IOPS, 21.05 MiB/s [2024-12-13T05:29:01.297Z] 5450.25 IOPS, 21.29 MiB/s [2024-12-13T05:29:02.233Z] 5435.20 IOPS, 21.23 MiB/s [2024-12-13T05:29:03.609Z] 5444.17 IOPS, 21.27 MiB/s [2024-12-13T05:29:04.546Z] 5458.29 IOPS, 21.32 MiB/s [2024-12-13T05:29:05.481Z] 5454.00 IOPS, 21.30 MiB/s [2024-12-13T05:29:06.416Z] 5469.67 IOPS, 21.37 MiB/s [2024-12-13T05:29:06.416Z] 5483.50 IOPS, 21.42 MiB/s 00:23:14.762 Latency(us) 00:23:14.762 [2024-12-13T05:29:06.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.762 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:14.762 Verification LBA range: start 0x0 length 0x2000 00:23:14.762 TLSTESTn1 : 10.02 5487.19 21.43 0.00 0.00 23291.91 4743.56 75397.61 00:23:14.762 [2024-12-13T05:29:06.416Z] =================================================================================================================== 00:23:14.762 [2024-12-13T05:29:06.416Z] Total : 5487.19 21.43 0.00 0.00 23291.91 4743.56 75397.61 00:23:14.762 { 00:23:14.762 "results": [ 00:23:14.762 { 00:23:14.762 "job": "TLSTESTn1", 00:23:14.762 "core_mask": "0x4", 00:23:14.762 "workload": "verify", 00:23:14.762 "status": "finished", 00:23:14.762 "verify_range": { 00:23:14.762 "start": 0, 00:23:14.762 "length": 8192 00:23:14.762 }, 00:23:14.762 "queue_depth": 128, 00:23:14.762 "io_size": 4096, 00:23:14.762 "runtime": 10.016429, 00:23:14.762 "iops": 5487.185103593307, 00:23:14.762 "mibps": 21.434316810911355, 00:23:14.762 "io_failed": 0, 00:23:14.762 "io_timeout": 0, 00:23:14.762 "avg_latency_us": 23291.908567269853, 00:23:14.762 "min_latency_us": 4743.558095238095, 00:23:14.762 "max_latency_us": 75397.60761904762 00:23:14.762 } 00:23:14.762 ], 00:23:14.762 "core_count": 1 00:23:14.762 } 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1019283 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019283 ']' 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019283 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019283 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019283' 00:23:14.762 killing process with pid 1019283 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019283 00:23:14.762 Received shutdown signal, test time was about 10.000000 seconds 00:23:14.762 00:23:14.762 Latency(us) 00:23:14.762 [2024-12-13T05:29:06.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:14.762 [2024-12-13T05:29:06.416Z] =================================================================================================================== 00:23:14.762 [2024-12-13T05:29:06.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:14.762 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019283 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.Va3na0apAG 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Va3na0apAG 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Va3na0apAG 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Va3na0apAG 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Va3na0apAG 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1021065 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1021065 /var/tmp/bdevperf.sock 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021065 ']' 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.021 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.021 [2024-12-13 06:29:06.494079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:15.021 [2024-12-13 06:29:06.494129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021065 ] 00:23:15.021 [2024-12-13 06:29:06.570552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.021 [2024-12-13 06:29:06.590198] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.280 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.280 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.280 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:15.280 [2024-12-13 06:29:06.852199] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Va3na0apAG': 0100666 00:23:15.280 [2024-12-13 06:29:06.852226] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:15.280 request: 00:23:15.280 { 00:23:15.280 "name": "key0", 00:23:15.280 "path": "/tmp/tmp.Va3na0apAG", 00:23:15.280 "method": "keyring_file_add_key", 00:23:15.280 "req_id": 1 00:23:15.280 } 00:23:15.280 Got JSON-RPC error response 00:23:15.280 response: 00:23:15.280 { 00:23:15.280 "code": -1, 00:23:15.280 "message": "Operation not permitted" 00:23:15.280 } 00:23:15.280 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:15.540 [2024-12-13 06:29:07.048781] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:15.540 [2024-12-13 06:29:07.048808] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:15.540 request: 00:23:15.540 { 00:23:15.540 "name": "TLSTEST", 00:23:15.540 "trtype": "tcp", 00:23:15.540 "traddr": "10.0.0.2", 00:23:15.540 "adrfam": "ipv4", 00:23:15.540 "trsvcid": "4420", 00:23:15.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.540 "prchk_reftag": false, 00:23:15.540 "prchk_guard": false, 00:23:15.540 "hdgst": false, 00:23:15.540 "ddgst": false, 00:23:15.540 "psk": "key0", 00:23:15.540 "allow_unrecognized_csi": false, 00:23:15.540 "method": "bdev_nvme_attach_controller", 00:23:15.540 "req_id": 1 00:23:15.540 } 00:23:15.540 Got JSON-RPC error response 00:23:15.540 response: 00:23:15.540 { 00:23:15.540 "code": -126, 00:23:15.540 "message": "Required key not available" 00:23:15.540 } 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1021065 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021065 ']' 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021065 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021065 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1021065' 00:23:15.540 killing process with pid 1021065 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021065 00:23:15.540 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.540 00:23:15.540 Latency(us) 00:23:15.540 [2024-12-13T05:29:07.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.540 [2024-12-13T05:29:07.194Z] =================================================================================================================== 00:23:15.540 [2024-12-13T05:29:07.194Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:15.540 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021065 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1019029 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1019029 ']' 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1019029 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1019029 00:23:15.799 
06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1019029' 00:23:15.799 killing process with pid 1019029 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1019029 00:23:15.799 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1019029 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021300 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021300 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021300 ']' 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:23:16.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.058 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.058 [2024-12-13 06:29:07.543500] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:16.058 [2024-12-13 06:29:07.543549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:16.058 [2024-12-13 06:29:07.620855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.058 [2024-12-13 06:29:07.638777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:16.059 [2024-12-13 06:29:07.638811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:16.059 [2024-12-13 06:29:07.638819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:16.059 [2024-12-13 06:29:07.638825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:16.059 [2024-12-13 06:29:07.638831] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:16.059 [2024-12-13 06:29:07.639308] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:16.317 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.318 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:16.318 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.318 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:16.318 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Va3na0apAG 00:23:16.318 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:16.318 [2024-12-13 06:29:07.945491] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.577 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:16.577 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:16.835 [2024-12-13 06:29:08.330471] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:16.835 [2024-12-13 06:29:08.330672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.835 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:17.094 malloc0 00:23:17.094 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:17.353 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:17.353 [2024-12-13 06:29:08.915905] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.Va3na0apAG': 0100666 00:23:17.353 [2024-12-13 06:29:08.915933] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:17.353 request: 00:23:17.353 { 00:23:17.353 "name": "key0", 00:23:17.353 "path": "/tmp/tmp.Va3na0apAG", 00:23:17.353 "method": "keyring_file_add_key", 00:23:17.353 "req_id": 1 
00:23:17.353 } 00:23:17.353 Got JSON-RPC error response 00:23:17.353 response: 00:23:17.353 { 00:23:17.353 "code": -1, 00:23:17.353 "message": "Operation not permitted" 00:23:17.353 } 00:23:17.353 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.612 [2024-12-13 06:29:09.100396] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:17.612 [2024-12-13 06:29:09.100428] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:17.612 request: 00:23:17.612 { 00:23:17.612 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.612 "host": "nqn.2016-06.io.spdk:host1", 00:23:17.612 "psk": "key0", 00:23:17.612 "method": "nvmf_subsystem_add_host", 00:23:17.612 "req_id": 1 00:23:17.612 } 00:23:17.612 Got JSON-RPC error response 00:23:17.612 response: 00:23:17.612 { 00:23:17.612 "code": -32603, 00:23:17.612 "message": "Internal error" 00:23:17.612 } 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1021300 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021300 ']' 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021300 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.612 06:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021300 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021300' 00:23:17.612 killing process with pid 1021300 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021300 00:23:17.612 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021300 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.Va3na0apAG 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1021559 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1021559 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021559 ']' 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.871 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.871 [2024-12-13 06:29:09.383557] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:17.871 [2024-12-13 06:29:09.383602] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.872 [2024-12-13 06:29:09.459617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.872 [2024-12-13 06:29:09.478211] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.872 [2024-12-13 06:29:09.478246] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.872 [2024-12-13 06:29:09.478253] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.872 [2024-12-13 06:29:09.478258] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.872 [2024-12-13 06:29:09.478263] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:17.872 [2024-12-13 06:29:09.478737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:18.130 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Va3na0apAG 00:23:18.131 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.131 [2024-12-13 06:29:09.777078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.389 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:18.389 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:18.648 [2024-12-13 06:29:10.190189] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:18.648 [2024-12-13 06:29:10.190397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:18.648 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:18.907 malloc0 00:23:18.907 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:19.166 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:19.166 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1021810 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1021810 /var/tmp/bdevperf.sock 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1021810 ']' 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:23:19.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.424 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.424 [2024-12-13 06:29:11.025275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:19.424 [2024-12-13 06:29:11.025325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021810 ] 00:23:19.683 [2024-12-13 06:29:11.098149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.683 [2024-12-13 06:29:11.119946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:19.683 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.683 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.683 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:19.942 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.201 [2024-12-13 06:29:11.598710] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:20.201 TLSTESTn1 00:23:20.201 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:20.460 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:20.460 "subsystems": [ 00:23:20.460 { 00:23:20.460 "subsystem": "keyring", 00:23:20.460 "config": [ 00:23:20.460 { 00:23:20.460 "method": "keyring_file_add_key", 00:23:20.460 "params": { 00:23:20.460 "name": "key0", 00:23:20.460 "path": "/tmp/tmp.Va3na0apAG" 00:23:20.460 } 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "subsystem": "iobuf", 00:23:20.460 "config": [ 00:23:20.460 { 00:23:20.460 "method": "iobuf_set_options", 00:23:20.460 "params": { 00:23:20.460 "small_pool_count": 8192, 00:23:20.460 "large_pool_count": 1024, 00:23:20.460 "small_bufsize": 8192, 00:23:20.460 "large_bufsize": 135168, 00:23:20.460 "enable_numa": false 00:23:20.460 } 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "subsystem": "sock", 00:23:20.460 "config": [ 00:23:20.460 { 00:23:20.460 "method": "sock_set_default_impl", 00:23:20.460 "params": { 00:23:20.460 "impl_name": "posix" 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "sock_impl_set_options", 00:23:20.460 "params": { 00:23:20.460 "impl_name": "ssl", 00:23:20.460 "recv_buf_size": 4096, 00:23:20.460 "send_buf_size": 4096, 00:23:20.460 "enable_recv_pipe": true, 00:23:20.460 "enable_quickack": false, 00:23:20.460 "enable_placement_id": 0, 00:23:20.460 "enable_zerocopy_send_server": true, 00:23:20.460 "enable_zerocopy_send_client": false, 00:23:20.460 "zerocopy_threshold": 0, 00:23:20.460 "tls_version": 0, 00:23:20.460 "enable_ktls": false 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "sock_impl_set_options", 00:23:20.460 "params": { 00:23:20.460 "impl_name": "posix", 00:23:20.460 "recv_buf_size": 2097152, 00:23:20.460 "send_buf_size": 2097152, 00:23:20.460 "enable_recv_pipe": true, 00:23:20.460 "enable_quickack": false, 00:23:20.460 "enable_placement_id": 0, 
00:23:20.460 "enable_zerocopy_send_server": true, 00:23:20.460 "enable_zerocopy_send_client": false, 00:23:20.460 "zerocopy_threshold": 0, 00:23:20.460 "tls_version": 0, 00:23:20.460 "enable_ktls": false 00:23:20.460 } 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "subsystem": "vmd", 00:23:20.460 "config": [] 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "subsystem": "accel", 00:23:20.460 "config": [ 00:23:20.460 { 00:23:20.460 "method": "accel_set_options", 00:23:20.460 "params": { 00:23:20.460 "small_cache_size": 128, 00:23:20.460 "large_cache_size": 16, 00:23:20.460 "task_count": 2048, 00:23:20.460 "sequence_count": 2048, 00:23:20.460 "buf_count": 2048 00:23:20.460 } 00:23:20.460 } 00:23:20.460 ] 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "subsystem": "bdev", 00:23:20.460 "config": [ 00:23:20.460 { 00:23:20.460 "method": "bdev_set_options", 00:23:20.460 "params": { 00:23:20.460 "bdev_io_pool_size": 65535, 00:23:20.460 "bdev_io_cache_size": 256, 00:23:20.460 "bdev_auto_examine": true, 00:23:20.460 "iobuf_small_cache_size": 128, 00:23:20.460 "iobuf_large_cache_size": 16 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "bdev_raid_set_options", 00:23:20.460 "params": { 00:23:20.460 "process_window_size_kb": 1024, 00:23:20.460 "process_max_bandwidth_mb_sec": 0 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "bdev_iscsi_set_options", 00:23:20.460 "params": { 00:23:20.460 "timeout_sec": 30 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "bdev_nvme_set_options", 00:23:20.460 "params": { 00:23:20.460 "action_on_timeout": "none", 00:23:20.460 "timeout_us": 0, 00:23:20.460 "timeout_admin_us": 0, 00:23:20.460 "keep_alive_timeout_ms": 10000, 00:23:20.460 "arbitration_burst": 0, 00:23:20.460 "low_priority_weight": 0, 00:23:20.460 "medium_priority_weight": 0, 00:23:20.460 "high_priority_weight": 0, 00:23:20.460 "nvme_adminq_poll_period_us": 10000, 00:23:20.460 "nvme_ioq_poll_period_us": 0, 
00:23:20.460 "io_queue_requests": 0, 00:23:20.460 "delay_cmd_submit": true, 00:23:20.460 "transport_retry_count": 4, 00:23:20.460 "bdev_retry_count": 3, 00:23:20.460 "transport_ack_timeout": 0, 00:23:20.460 "ctrlr_loss_timeout_sec": 0, 00:23:20.460 "reconnect_delay_sec": 0, 00:23:20.460 "fast_io_fail_timeout_sec": 0, 00:23:20.460 "disable_auto_failback": false, 00:23:20.460 "generate_uuids": false, 00:23:20.460 "transport_tos": 0, 00:23:20.460 "nvme_error_stat": false, 00:23:20.460 "rdma_srq_size": 0, 00:23:20.460 "io_path_stat": false, 00:23:20.460 "allow_accel_sequence": false, 00:23:20.460 "rdma_max_cq_size": 0, 00:23:20.460 "rdma_cm_event_timeout_ms": 0, 00:23:20.460 "dhchap_digests": [ 00:23:20.460 "sha256", 00:23:20.460 "sha384", 00:23:20.460 "sha512" 00:23:20.460 ], 00:23:20.460 "dhchap_dhgroups": [ 00:23:20.460 "null", 00:23:20.460 "ffdhe2048", 00:23:20.460 "ffdhe3072", 00:23:20.460 "ffdhe4096", 00:23:20.460 "ffdhe6144", 00:23:20.460 "ffdhe8192" 00:23:20.460 ], 00:23:20.460 "rdma_umr_per_io": false 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "bdev_nvme_set_hotplug", 00:23:20.460 "params": { 00:23:20.460 "period_us": 100000, 00:23:20.460 "enable": false 00:23:20.460 } 00:23:20.460 }, 00:23:20.460 { 00:23:20.460 "method": "bdev_malloc_create", 00:23:20.460 "params": { 00:23:20.460 "name": "malloc0", 00:23:20.460 "num_blocks": 8192, 00:23:20.460 "block_size": 4096, 00:23:20.461 "physical_block_size": 4096, 00:23:20.461 "uuid": "faa0889d-6337-4a1f-8a00-520772b239e7", 00:23:20.461 "optimal_io_boundary": 0, 00:23:20.461 "md_size": 0, 00:23:20.461 "dif_type": 0, 00:23:20.461 "dif_is_head_of_md": false, 00:23:20.461 "dif_pi_format": 0 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "bdev_wait_for_examine" 00:23:20.461 } 00:23:20.461 ] 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "subsystem": "nbd", 00:23:20.461 "config": [] 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "subsystem": "scheduler", 00:23:20.461 "config": [ 
00:23:20.461 { 00:23:20.461 "method": "framework_set_scheduler", 00:23:20.461 "params": { 00:23:20.461 "name": "static" 00:23:20.461 } 00:23:20.461 } 00:23:20.461 ] 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "subsystem": "nvmf", 00:23:20.461 "config": [ 00:23:20.461 { 00:23:20.461 "method": "nvmf_set_config", 00:23:20.461 "params": { 00:23:20.461 "discovery_filter": "match_any", 00:23:20.461 "admin_cmd_passthru": { 00:23:20.461 "identify_ctrlr": false 00:23:20.461 }, 00:23:20.461 "dhchap_digests": [ 00:23:20.461 "sha256", 00:23:20.461 "sha384", 00:23:20.461 "sha512" 00:23:20.461 ], 00:23:20.461 "dhchap_dhgroups": [ 00:23:20.461 "null", 00:23:20.461 "ffdhe2048", 00:23:20.461 "ffdhe3072", 00:23:20.461 "ffdhe4096", 00:23:20.461 "ffdhe6144", 00:23:20.461 "ffdhe8192" 00:23:20.461 ] 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_set_max_subsystems", 00:23:20.461 "params": { 00:23:20.461 "max_subsystems": 1024 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_set_crdt", 00:23:20.461 "params": { 00:23:20.461 "crdt1": 0, 00:23:20.461 "crdt2": 0, 00:23:20.461 "crdt3": 0 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_create_transport", 00:23:20.461 "params": { 00:23:20.461 "trtype": "TCP", 00:23:20.461 "max_queue_depth": 128, 00:23:20.461 "max_io_qpairs_per_ctrlr": 127, 00:23:20.461 "in_capsule_data_size": 4096, 00:23:20.461 "max_io_size": 131072, 00:23:20.461 "io_unit_size": 131072, 00:23:20.461 "max_aq_depth": 128, 00:23:20.461 "num_shared_buffers": 511, 00:23:20.461 "buf_cache_size": 4294967295, 00:23:20.461 "dif_insert_or_strip": false, 00:23:20.461 "zcopy": false, 00:23:20.461 "c2h_success": false, 00:23:20.461 "sock_priority": 0, 00:23:20.461 "abort_timeout_sec": 1, 00:23:20.461 "ack_timeout": 0, 00:23:20.461 "data_wr_pool_size": 0 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_create_subsystem", 00:23:20.461 "params": { 00:23:20.461 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:20.461 "allow_any_host": false, 00:23:20.461 "serial_number": "SPDK00000000000001", 00:23:20.461 "model_number": "SPDK bdev Controller", 00:23:20.461 "max_namespaces": 10, 00:23:20.461 "min_cntlid": 1, 00:23:20.461 "max_cntlid": 65519, 00:23:20.461 "ana_reporting": false 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_subsystem_add_host", 00:23:20.461 "params": { 00:23:20.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.461 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.461 "psk": "key0" 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_subsystem_add_ns", 00:23:20.461 "params": { 00:23:20.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.461 "namespace": { 00:23:20.461 "nsid": 1, 00:23:20.461 "bdev_name": "malloc0", 00:23:20.461 "nguid": "FAA0889D63374A1F8A00520772B239E7", 00:23:20.461 "uuid": "faa0889d-6337-4a1f-8a00-520772b239e7", 00:23:20.461 "no_auto_visible": false 00:23:20.461 } 00:23:20.461 } 00:23:20.461 }, 00:23:20.461 { 00:23:20.461 "method": "nvmf_subsystem_add_listener", 00:23:20.461 "params": { 00:23:20.461 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.461 "listen_address": { 00:23:20.461 "trtype": "TCP", 00:23:20.461 "adrfam": "IPv4", 00:23:20.461 "traddr": "10.0.0.2", 00:23:20.461 "trsvcid": "4420" 00:23:20.461 }, 00:23:20.461 "secure_channel": true 00:23:20.461 } 00:23:20.461 } 00:23:20.461 ] 00:23:20.461 } 00:23:20.461 ] 00:23:20.461 }' 00:23:20.461 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:20.720 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:20.720 "subsystems": [ 00:23:20.720 { 00:23:20.720 "subsystem": "keyring", 00:23:20.720 "config": [ 00:23:20.720 { 00:23:20.720 "method": "keyring_file_add_key", 00:23:20.720 "params": { 00:23:20.720 "name": "key0", 00:23:20.720 "path": 
"/tmp/tmp.Va3na0apAG" 00:23:20.720 } 00:23:20.720 } 00:23:20.720 ] 00:23:20.720 }, 00:23:20.721 { 00:23:20.721 "subsystem": "iobuf", 00:23:20.721 "config": [ 00:23:20.721 { 00:23:20.721 "method": "iobuf_set_options", 00:23:20.721 "params": { 00:23:20.721 "small_pool_count": 8192, 00:23:20.721 "large_pool_count": 1024, 00:23:20.721 "small_bufsize": 8192, 00:23:20.721 "large_bufsize": 135168, 00:23:20.721 "enable_numa": false 00:23:20.721 } 00:23:20.721 } 00:23:20.721 ] 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "subsystem": "sock", 00:23:20.721 "config": [ 00:23:20.721 { 00:23:20.721 "method": "sock_set_default_impl", 00:23:20.721 "params": { 00:23:20.721 "impl_name": "posix" 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "sock_impl_set_options", 00:23:20.721 "params": { 00:23:20.721 "impl_name": "ssl", 00:23:20.721 "recv_buf_size": 4096, 00:23:20.721 "send_buf_size": 4096, 00:23:20.721 "enable_recv_pipe": true, 00:23:20.721 "enable_quickack": false, 00:23:20.721 "enable_placement_id": 0, 00:23:20.721 "enable_zerocopy_send_server": true, 00:23:20.721 "enable_zerocopy_send_client": false, 00:23:20.721 "zerocopy_threshold": 0, 00:23:20.721 "tls_version": 0, 00:23:20.721 "enable_ktls": false 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "sock_impl_set_options", 00:23:20.721 "params": { 00:23:20.721 "impl_name": "posix", 00:23:20.721 "recv_buf_size": 2097152, 00:23:20.721 "send_buf_size": 2097152, 00:23:20.721 "enable_recv_pipe": true, 00:23:20.721 "enable_quickack": false, 00:23:20.721 "enable_placement_id": 0, 00:23:20.721 "enable_zerocopy_send_server": true, 00:23:20.721 "enable_zerocopy_send_client": false, 00:23:20.721 "zerocopy_threshold": 0, 00:23:20.721 "tls_version": 0, 00:23:20.721 "enable_ktls": false 00:23:20.721 } 00:23:20.721 } 00:23:20.721 ] 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "subsystem": "vmd", 00:23:20.721 "config": [] 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "subsystem": "accel", 00:23:20.721 
"config": [ 00:23:20.721 { 00:23:20.721 "method": "accel_set_options", 00:23:20.721 "params": { 00:23:20.721 "small_cache_size": 128, 00:23:20.721 "large_cache_size": 16, 00:23:20.721 "task_count": 2048, 00:23:20.721 "sequence_count": 2048, 00:23:20.721 "buf_count": 2048 00:23:20.721 } 00:23:20.721 } 00:23:20.721 ] 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "subsystem": "bdev", 00:23:20.721 "config": [ 00:23:20.721 { 00:23:20.721 "method": "bdev_set_options", 00:23:20.721 "params": { 00:23:20.721 "bdev_io_pool_size": 65535, 00:23:20.721 "bdev_io_cache_size": 256, 00:23:20.721 "bdev_auto_examine": true, 00:23:20.721 "iobuf_small_cache_size": 128, 00:23:20.721 "iobuf_large_cache_size": 16 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_raid_set_options", 00:23:20.721 "params": { 00:23:20.721 "process_window_size_kb": 1024, 00:23:20.721 "process_max_bandwidth_mb_sec": 0 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_iscsi_set_options", 00:23:20.721 "params": { 00:23:20.721 "timeout_sec": 30 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_nvme_set_options", 00:23:20.721 "params": { 00:23:20.721 "action_on_timeout": "none", 00:23:20.721 "timeout_us": 0, 00:23:20.721 "timeout_admin_us": 0, 00:23:20.721 "keep_alive_timeout_ms": 10000, 00:23:20.721 "arbitration_burst": 0, 00:23:20.721 "low_priority_weight": 0, 00:23:20.721 "medium_priority_weight": 0, 00:23:20.721 "high_priority_weight": 0, 00:23:20.721 "nvme_adminq_poll_period_us": 10000, 00:23:20.721 "nvme_ioq_poll_period_us": 0, 00:23:20.721 "io_queue_requests": 512, 00:23:20.721 "delay_cmd_submit": true, 00:23:20.721 "transport_retry_count": 4, 00:23:20.721 "bdev_retry_count": 3, 00:23:20.721 "transport_ack_timeout": 0, 00:23:20.721 "ctrlr_loss_timeout_sec": 0, 00:23:20.721 "reconnect_delay_sec": 0, 00:23:20.721 "fast_io_fail_timeout_sec": 0, 00:23:20.721 "disable_auto_failback": false, 00:23:20.721 "generate_uuids": false, 00:23:20.721 
"transport_tos": 0, 00:23:20.721 "nvme_error_stat": false, 00:23:20.721 "rdma_srq_size": 0, 00:23:20.721 "io_path_stat": false, 00:23:20.721 "allow_accel_sequence": false, 00:23:20.721 "rdma_max_cq_size": 0, 00:23:20.721 "rdma_cm_event_timeout_ms": 0, 00:23:20.721 "dhchap_digests": [ 00:23:20.721 "sha256", 00:23:20.721 "sha384", 00:23:20.721 "sha512" 00:23:20.721 ], 00:23:20.721 "dhchap_dhgroups": [ 00:23:20.721 "null", 00:23:20.721 "ffdhe2048", 00:23:20.721 "ffdhe3072", 00:23:20.721 "ffdhe4096", 00:23:20.721 "ffdhe6144", 00:23:20.721 "ffdhe8192" 00:23:20.721 ], 00:23:20.721 "rdma_umr_per_io": false 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_nvme_attach_controller", 00:23:20.721 "params": { 00:23:20.721 "name": "TLSTEST", 00:23:20.721 "trtype": "TCP", 00:23:20.721 "adrfam": "IPv4", 00:23:20.721 "traddr": "10.0.0.2", 00:23:20.721 "trsvcid": "4420", 00:23:20.721 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.721 "prchk_reftag": false, 00:23:20.721 "prchk_guard": false, 00:23:20.721 "ctrlr_loss_timeout_sec": 0, 00:23:20.721 "reconnect_delay_sec": 0, 00:23:20.721 "fast_io_fail_timeout_sec": 0, 00:23:20.721 "psk": "key0", 00:23:20.721 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:20.721 "hdgst": false, 00:23:20.721 "ddgst": false, 00:23:20.721 "multipath": "multipath" 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_nvme_set_hotplug", 00:23:20.721 "params": { 00:23:20.721 "period_us": 100000, 00:23:20.721 "enable": false 00:23:20.721 } 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "method": "bdev_wait_for_examine" 00:23:20.721 } 00:23:20.721 ] 00:23:20.721 }, 00:23:20.721 { 00:23:20.721 "subsystem": "nbd", 00:23:20.721 "config": [] 00:23:20.721 } 00:23:20.721 ] 00:23:20.721 }' 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1021810 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021810 ']' 00:23:20.721 06:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021810 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021810 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021810' 00:23:20.721 killing process with pid 1021810 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021810 00:23:20.721 Received shutdown signal, test time was about 10.000000 seconds 00:23:20.721 00:23:20.721 Latency(us) 00:23:20.721 [2024-12-13T05:29:12.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:20.721 [2024-12-13T05:29:12.375Z] =================================================================================================================== 00:23:20.721 [2024-12-13T05:29:12.375Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:20.721 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021810 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1021559 ']' 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 
00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1021559' 00:23:20.981 killing process with pid 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1021559 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:20.981 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:20.981 "subsystems": [ 00:23:20.981 { 00:23:20.981 "subsystem": "keyring", 00:23:20.981 "config": [ 00:23:20.981 { 00:23:20.981 "method": "keyring_file_add_key", 00:23:20.981 "params": { 00:23:20.981 "name": "key0", 00:23:20.981 "path": "/tmp/tmp.Va3na0apAG" 00:23:20.981 } 00:23:20.981 } 00:23:20.981 ] 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "subsystem": "iobuf", 00:23:20.981 "config": [ 00:23:20.981 { 00:23:20.981 "method": "iobuf_set_options", 00:23:20.981 "params": { 00:23:20.981 "small_pool_count": 8192, 00:23:20.981 "large_pool_count": 1024, 00:23:20.981 "small_bufsize": 8192, 00:23:20.981 "large_bufsize": 135168, 00:23:20.981 "enable_numa": false 
00:23:20.981 } 00:23:20.981 } 00:23:20.981 ] 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "subsystem": "sock", 00:23:20.981 "config": [ 00:23:20.981 { 00:23:20.981 "method": "sock_set_default_impl", 00:23:20.981 "params": { 00:23:20.981 "impl_name": "posix" 00:23:20.981 } 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "method": "sock_impl_set_options", 00:23:20.981 "params": { 00:23:20.981 "impl_name": "ssl", 00:23:20.981 "recv_buf_size": 4096, 00:23:20.981 "send_buf_size": 4096, 00:23:20.981 "enable_recv_pipe": true, 00:23:20.981 "enable_quickack": false, 00:23:20.981 "enable_placement_id": 0, 00:23:20.981 "enable_zerocopy_send_server": true, 00:23:20.981 "enable_zerocopy_send_client": false, 00:23:20.981 "zerocopy_threshold": 0, 00:23:20.981 "tls_version": 0, 00:23:20.981 "enable_ktls": false 00:23:20.981 } 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "method": "sock_impl_set_options", 00:23:20.981 "params": { 00:23:20.981 "impl_name": "posix", 00:23:20.981 "recv_buf_size": 2097152, 00:23:20.981 "send_buf_size": 2097152, 00:23:20.981 "enable_recv_pipe": true, 00:23:20.981 "enable_quickack": false, 00:23:20.981 "enable_placement_id": 0, 00:23:20.981 "enable_zerocopy_send_server": true, 00:23:20.981 "enable_zerocopy_send_client": false, 00:23:20.981 "zerocopy_threshold": 0, 00:23:20.981 "tls_version": 0, 00:23:20.981 "enable_ktls": false 00:23:20.981 } 00:23:20.981 } 00:23:20.981 ] 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "subsystem": "vmd", 00:23:20.981 "config": [] 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "subsystem": "accel", 00:23:20.981 "config": [ 00:23:20.981 { 00:23:20.981 "method": "accel_set_options", 00:23:20.981 "params": { 00:23:20.981 "small_cache_size": 128, 00:23:20.981 "large_cache_size": 16, 00:23:20.981 "task_count": 2048, 00:23:20.981 "sequence_count": 2048, 00:23:20.981 "buf_count": 2048 00:23:20.981 } 00:23:20.981 } 00:23:20.981 ] 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "subsystem": "bdev", 00:23:20.981 "config": [ 00:23:20.981 { 
00:23:20.981 "method": "bdev_set_options", 00:23:20.981 "params": { 00:23:20.981 "bdev_io_pool_size": 65535, 00:23:20.981 "bdev_io_cache_size": 256, 00:23:20.981 "bdev_auto_examine": true, 00:23:20.981 "iobuf_small_cache_size": 128, 00:23:20.981 "iobuf_large_cache_size": 16 00:23:20.981 } 00:23:20.981 }, 00:23:20.981 { 00:23:20.981 "method": "bdev_raid_set_options", 00:23:20.981 "params": { 00:23:20.982 "process_window_size_kb": 1024, 00:23:20.982 "process_max_bandwidth_mb_sec": 0 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "bdev_iscsi_set_options", 00:23:20.982 "params": { 00:23:20.982 "timeout_sec": 30 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "bdev_nvme_set_options", 00:23:20.982 "params": { 00:23:20.982 "action_on_timeout": "none", 00:23:20.982 "timeout_us": 0, 00:23:20.982 "timeout_admin_us": 0, 00:23:20.982 "keep_alive_timeout_ms": 10000, 00:23:20.982 "arbitration_burst": 0, 00:23:20.982 "low_priority_weight": 0, 00:23:20.982 "medium_priority_weight": 0, 00:23:20.982 "high_priority_weight": 0, 00:23:20.982 "nvme_adminq_poll_period_us": 10000, 00:23:20.982 "nvme_ioq_poll_period_us": 0, 00:23:20.982 "io_queue_requests": 0, 00:23:20.982 "delay_cmd_submit": true, 00:23:20.982 "transport_retry_count": 4, 00:23:20.982 "bdev_retry_count": 3, 00:23:20.982 "transport_ack_timeout": 0, 00:23:20.982 "ctrlr_loss_timeout_sec": 0, 00:23:20.982 "reconnect_delay_sec": 0, 00:23:20.982 "fast_io_fail_timeout_sec": 0, 00:23:20.982 "disable_auto_failback": false, 00:23:20.982 "generate_uuids": false, 00:23:20.982 "transport_tos": 0, 00:23:20.982 "nvme_error_stat": false, 00:23:20.982 "rdma_srq_size": 0, 00:23:20.982 "io_path_stat": false, 00:23:20.982 "allow_accel_sequence": false, 00:23:20.982 "rdma_max_cq_size": 0, 00:23:20.982 "rdma_cm_event_timeout_ms": 0, 00:23:20.982 "dhchap_digests": [ 00:23:20.982 "sha256", 00:23:20.982 "sha384", 00:23:20.982 "sha512" 00:23:20.982 ], 00:23:20.982 "dhchap_dhgroups": [ 00:23:20.982 "null", 
00:23:20.982 "ffdhe2048", 00:23:20.982 "ffdhe3072", 00:23:20.982 "ffdhe4096", 00:23:20.982 "ffdhe6144", 00:23:20.982 "ffdhe8192" 00:23:20.982 ], 00:23:20.982 "rdma_umr_per_io": false 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "bdev_nvme_set_hotplug", 00:23:20.982 "params": { 00:23:20.982 "period_us": 100000, 00:23:20.982 "enable": false 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "bdev_malloc_create", 00:23:20.982 "params": { 00:23:20.982 "name": "malloc0", 00:23:20.982 "num_blocks": 8192, 00:23:20.982 "block_size": 4096, 00:23:20.982 "physical_block_size": 4096, 00:23:20.982 "uuid": "faa0889d-6337-4a1f-8a00-520772b239e7", 00:23:20.982 "optimal_io_boundary": 0, 00:23:20.982 "md_size": 0, 00:23:20.982 "dif_type": 0, 00:23:20.982 "dif_is_head_of_md": false, 00:23:20.982 "dif_pi_format": 0 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "bdev_wait_for_examine" 00:23:20.982 } 00:23:20.982 ] 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "subsystem": "nbd", 00:23:20.982 "config": [] 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "subsystem": "scheduler", 00:23:20.982 "config": [ 00:23:20.982 { 00:23:20.982 "method": "framework_set_scheduler", 00:23:20.982 "params": { 00:23:20.982 "name": "static" 00:23:20.982 } 00:23:20.982 } 00:23:20.982 ] 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "subsystem": "nvmf", 00:23:20.982 "config": [ 00:23:20.982 { 00:23:20.982 "method": "nvmf_set_config", 00:23:20.982 "params": { 00:23:20.982 "discovery_filter": "match_any", 00:23:20.982 "admin_cmd_passthru": { 00:23:20.982 "identify_ctrlr": false 00:23:20.982 }, 00:23:20.982 "dhchap_digests": [ 00:23:20.982 "sha256", 00:23:20.982 "sha384", 00:23:20.982 "sha512" 00:23:20.982 ], 00:23:20.982 "dhchap_dhgroups": [ 00:23:20.982 "null", 00:23:20.982 "ffdhe2048", 00:23:20.982 "ffdhe3072", 00:23:20.982 "ffdhe4096", 00:23:20.982 "ffdhe6144", 00:23:20.982 "ffdhe8192" 00:23:20.982 ] 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 
00:23:20.982 "method": "nvmf_set_max_subsystems", 00:23:20.982 "params": { 00:23:20.982 "max_subsystems": 1024 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_set_crdt", 00:23:20.982 "params": { 00:23:20.982 "crdt1": 0, 00:23:20.982 "crdt2": 0, 00:23:20.982 "crdt3": 0 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_create_transport", 00:23:20.982 "params": { 00:23:20.982 "trtype": "TCP", 00:23:20.982 "max_queue_depth": 128, 00:23:20.982 "max_io_qpairs_per_ctrlr": 127, 00:23:20.982 "in_capsule_data_size": 4096, 00:23:20.982 "max_io_size": 131072, 00:23:20.982 "io_unit_size": 131072, 00:23:20.982 "max_aq_depth": 128, 00:23:20.982 "num_shared_buffers": 511, 00:23:20.982 "buf_cache_size": 4294967295, 00:23:20.982 "dif_insert_or_strip": false, 00:23:20.982 "zcopy": false, 00:23:20.982 "c2h_success": false, 00:23:20.982 "sock_priority": 0, 00:23:20.982 "abort_timeout_sec": 1, 00:23:20.982 "ack_timeout": 0, 00:23:20.982 "data_wr_pool_size": 0 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_create_subsystem", 00:23:20.982 "params": { 00:23:20.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.982 "allow_any_host": false, 00:23:20.982 "serial_number": "SPDK00000000000001", 00:23:20.982 "model_number": "SPDK bdev Controller", 00:23:20.982 "max_namespaces": 10, 00:23:20.982 "min_cntlid": 1, 00:23:20.982 "max_cntlid": 65519, 00:23:20.982 "ana_reporting": false 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_subsystem_add_host", 00:23:20.982 "params": { 00:23:20.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.982 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.982 "psk": "key0" 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_subsystem_add_ns", 00:23:20.982 "params": { 00:23:20.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.982 "namespace": { 00:23:20.982 "nsid": 1, 00:23:20.982 "bdev_name": "malloc0", 00:23:20.982 "nguid": 
"FAA0889D63374A1F8A00520772B239E7", 00:23:20.982 "uuid": "faa0889d-6337-4a1f-8a00-520772b239e7", 00:23:20.982 "no_auto_visible": false 00:23:20.982 } 00:23:20.982 } 00:23:20.982 }, 00:23:20.982 { 00:23:20.982 "method": "nvmf_subsystem_add_listener", 00:23:20.982 "params": { 00:23:20.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.982 "listen_address": { 00:23:20.982 "trtype": "TCP", 00:23:20.982 "adrfam": "IPv4", 00:23:20.982 "traddr": "10.0.0.2", 00:23:20.982 "trsvcid": "4420" 00:23:20.982 }, 00:23:20.982 "secure_channel": true 00:23:20.982 } 00:23:20.982 } 00:23:20.982 ] 00:23:20.982 } 00:23:20.982 ] 00:23:20.982 }' 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1022117 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1022117 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022117 ']' 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.982 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:21.241 [2024-12-13 06:29:12.680695] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:21.241 [2024-12-13 06:29:12.680740] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:21.242 [2024-12-13 06:29:12.759650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:21.242 [2024-12-13 06:29:12.780274] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:21.242 [2024-12-13 06:29:12.780311] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:21.242 [2024-12-13 06:29:12.780318] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:21.242 [2024-12-13 06:29:12.780324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:21.242 [2024-12-13 06:29:12.780329] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:21.242 [2024-12-13 06:29:12.780832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:21.500 [2024-12-13 06:29:12.987406] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.500 [2024-12-13 06:29:13.019431] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.500 [2024-12-13 06:29:13.019621] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1022297 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1022297 /var/tmp/bdevperf.sock 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1022297 ']' 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.069 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:22.069 "subsystems": [ 00:23:22.069 { 00:23:22.069 "subsystem": "keyring", 00:23:22.069 "config": [ 00:23:22.069 { 00:23:22.069 "method": "keyring_file_add_key", 00:23:22.069 "params": { 00:23:22.069 "name": "key0", 00:23:22.069 "path": "/tmp/tmp.Va3na0apAG" 00:23:22.069 } 00:23:22.069 } 00:23:22.069 ] 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "subsystem": "iobuf", 00:23:22.069 "config": [ 00:23:22.069 { 00:23:22.069 "method": "iobuf_set_options", 00:23:22.069 "params": { 00:23:22.069 "small_pool_count": 8192, 00:23:22.069 "large_pool_count": 1024, 00:23:22.069 "small_bufsize": 8192, 00:23:22.069 "large_bufsize": 135168, 00:23:22.069 "enable_numa": false 00:23:22.069 } 00:23:22.069 } 00:23:22.069 ] 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "subsystem": "sock", 00:23:22.069 "config": [ 00:23:22.069 { 00:23:22.069 "method": "sock_set_default_impl", 00:23:22.069 "params": { 00:23:22.069 "impl_name": "posix" 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "sock_impl_set_options", 00:23:22.069 "params": { 00:23:22.069 "impl_name": "ssl", 00:23:22.069 "recv_buf_size": 4096, 00:23:22.069 "send_buf_size": 4096, 00:23:22.069 "enable_recv_pipe": true, 00:23:22.069 "enable_quickack": false, 00:23:22.069 "enable_placement_id": 0, 00:23:22.069 "enable_zerocopy_send_server": true, 00:23:22.069 "enable_zerocopy_send_client": false, 00:23:22.069 "zerocopy_threshold": 0, 00:23:22.069 "tls_version": 0, 00:23:22.069 "enable_ktls": false 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "sock_impl_set_options", 00:23:22.069 "params": { 
00:23:22.069 "impl_name": "posix", 00:23:22.069 "recv_buf_size": 2097152, 00:23:22.069 "send_buf_size": 2097152, 00:23:22.069 "enable_recv_pipe": true, 00:23:22.069 "enable_quickack": false, 00:23:22.069 "enable_placement_id": 0, 00:23:22.069 "enable_zerocopy_send_server": true, 00:23:22.069 "enable_zerocopy_send_client": false, 00:23:22.069 "zerocopy_threshold": 0, 00:23:22.069 "tls_version": 0, 00:23:22.069 "enable_ktls": false 00:23:22.069 } 00:23:22.069 } 00:23:22.069 ] 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "subsystem": "vmd", 00:23:22.069 "config": [] 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "subsystem": "accel", 00:23:22.069 "config": [ 00:23:22.069 { 00:23:22.069 "method": "accel_set_options", 00:23:22.069 "params": { 00:23:22.069 "small_cache_size": 128, 00:23:22.069 "large_cache_size": 16, 00:23:22.069 "task_count": 2048, 00:23:22.069 "sequence_count": 2048, 00:23:22.069 "buf_count": 2048 00:23:22.069 } 00:23:22.069 } 00:23:22.069 ] 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "subsystem": "bdev", 00:23:22.069 "config": [ 00:23:22.069 { 00:23:22.069 "method": "bdev_set_options", 00:23:22.069 "params": { 00:23:22.069 "bdev_io_pool_size": 65535, 00:23:22.069 "bdev_io_cache_size": 256, 00:23:22.069 "bdev_auto_examine": true, 00:23:22.069 "iobuf_small_cache_size": 128, 00:23:22.069 "iobuf_large_cache_size": 16 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "bdev_raid_set_options", 00:23:22.069 "params": { 00:23:22.069 "process_window_size_kb": 1024, 00:23:22.069 "process_max_bandwidth_mb_sec": 0 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "bdev_iscsi_set_options", 00:23:22.069 "params": { 00:23:22.069 "timeout_sec": 30 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "bdev_nvme_set_options", 00:23:22.069 "params": { 00:23:22.069 "action_on_timeout": "none", 00:23:22.069 "timeout_us": 0, 00:23:22.069 "timeout_admin_us": 0, 00:23:22.069 "keep_alive_timeout_ms": 10000, 00:23:22.069 
"arbitration_burst": 0, 00:23:22.069 "low_priority_weight": 0, 00:23:22.069 "medium_priority_weight": 0, 00:23:22.069 "high_priority_weight": 0, 00:23:22.069 "nvme_adminq_poll_period_us": 10000, 00:23:22.069 "nvme_ioq_poll_period_us": 0, 00:23:22.069 "io_queue_requests": 512, 00:23:22.069 "delay_cmd_submit": true, 00:23:22.069 "transport_retry_count": 4, 00:23:22.069 "bdev_retry_count": 3, 00:23:22.069 "transport_ack_timeout": 0, 00:23:22.069 "ctrlr_loss_timeout_sec": 0, 00:23:22.069 "reconnect_delay_sec": 0, 00:23:22.069 "fast_io_fail_timeout_sec": 0, 00:23:22.069 "disable_auto_failback": false, 00:23:22.069 "generate_uuids": false, 00:23:22.069 "transport_tos": 0, 00:23:22.069 "nvme_error_stat": false, 00:23:22.069 "rdma_srq_size": 0, 00:23:22.069 "io_path_stat": false, 00:23:22.069 "allow_accel_sequence": false, 00:23:22.069 "rdma_max_cq_size": 0, 00:23:22.069 "rdma_cm_event_timeout_ms": 0, 00:23:22.069 "dhchap_digests": [ 00:23:22.069 "sha256", 00:23:22.069 "sha384", 00:23:22.069 "sha512" 00:23:22.069 ], 00:23:22.069 "dhchap_dhgroups": [ 00:23:22.069 "null", 00:23:22.069 "ffdhe2048", 00:23:22.069 "ffdhe3072", 00:23:22.069 "ffdhe4096", 00:23:22.069 "ffdhe6144", 00:23:22.069 "ffdhe8192" 00:23:22.069 ], 00:23:22.069 "rdma_umr_per_io": false 00:23:22.069 } 00:23:22.069 }, 00:23:22.069 { 00:23:22.069 "method": "bdev_nvme_attach_controller", 00:23:22.069 "params": { 00:23:22.069 "name": "TLSTEST", 00:23:22.069 "trtype": "TCP", 00:23:22.069 "adrfam": "IPv4", 00:23:22.069 "traddr": "10.0.0.2", 00:23:22.069 "trsvcid": "4420", 00:23:22.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:22.069 "prchk_reftag": false, 00:23:22.069 "prchk_guard": false, 00:23:22.069 "ctrlr_loss_timeout_sec": 0, 00:23:22.069 "reconnect_delay_sec": 0, 00:23:22.069 "fast_io_fail_timeout_sec": 0, 00:23:22.070 "psk": "key0", 00:23:22.070 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:22.070 "hdgst": false, 00:23:22.070 "ddgst": false, 00:23:22.070 "multipath": "multipath" 00:23:22.070 } 
00:23:22.070 }, 00:23:22.070 { 00:23:22.070 "method": "bdev_nvme_set_hotplug", 00:23:22.070 "params": { 00:23:22.070 "period_us": 100000, 00:23:22.070 "enable": false 00:23:22.070 } 00:23:22.070 }, 00:23:22.070 { 00:23:22.070 "method": "bdev_wait_for_examine" 00:23:22.070 } 00:23:22.070 ] 00:23:22.070 }, 00:23:22.070 { 00:23:22.070 "subsystem": "nbd", 00:23:22.070 "config": [] 00:23:22.070 } 00:23:22.070 ] 00:23:22.070 }' 00:23:22.070 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:22.070 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.070 [2024-12-13 06:29:13.594478] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:22.070 [2024-12-13 06:29:13.594526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1022297 ] 00:23:22.070 [2024-12-13 06:29:13.669673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.070 [2024-12-13 06:29:13.691259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.329 [2024-12-13 06:29:13.839611] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.896 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:22.896 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:22.896 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:22.896 Running I/O for 10 seconds... 
00:23:25.209 5502.00 IOPS, 21.49 MiB/s [2024-12-13T05:29:18.000Z] 5493.50 IOPS, 21.46 MiB/s [2024-12-13T05:29:18.567Z] 5533.00 IOPS, 21.61 MiB/s [2024-12-13T05:29:19.942Z] 5553.75 IOPS, 21.69 MiB/s [2024-12-13T05:29:20.877Z] 5563.40 IOPS, 21.73 MiB/s [2024-12-13T05:29:21.811Z] 5556.00 IOPS, 21.70 MiB/s [2024-12-13T05:29:22.745Z] 5565.29 IOPS, 21.74 MiB/s [2024-12-13T05:29:23.679Z] 5563.00 IOPS, 21.73 MiB/s [2024-12-13T05:29:24.613Z] 5569.56 IOPS, 21.76 MiB/s [2024-12-13T05:29:24.613Z] 5547.10 IOPS, 21.67 MiB/s 00:23:32.959 Latency(us) 00:23:32.959 [2024-12-13T05:29:24.613Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.959 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:32.959 Verification LBA range: start 0x0 length 0x2000 00:23:32.959 TLSTESTn1 : 10.02 5550.34 21.68 0.00 0.00 23026.59 4712.35 20472.20 00:23:32.959 [2024-12-13T05:29:24.613Z] =================================================================================================================== 00:23:32.959 [2024-12-13T05:29:24.613Z] Total : 5550.34 21.68 0.00 0.00 23026.59 4712.35 20472.20 00:23:32.959 { 00:23:32.959 "results": [ 00:23:32.959 { 00:23:32.959 "job": "TLSTESTn1", 00:23:32.959 "core_mask": "0x4", 00:23:32.959 "workload": "verify", 00:23:32.959 "status": "finished", 00:23:32.959 "verify_range": { 00:23:32.959 "start": 0, 00:23:32.959 "length": 8192 00:23:32.959 }, 00:23:32.959 "queue_depth": 128, 00:23:32.959 "io_size": 4096, 00:23:32.959 "runtime": 10.017035, 00:23:32.959 "iops": 5550.344987314111, 00:23:32.959 "mibps": 21.681035106695745, 00:23:32.959 "io_failed": 0, 00:23:32.959 "io_timeout": 0, 00:23:32.959 "avg_latency_us": 23026.590660369762, 00:23:32.959 "min_latency_us": 4712.350476190476, 00:23:32.959 "max_latency_us": 20472.198095238095 00:23:32.959 } 00:23:32.959 ], 00:23:32.959 "core_count": 1 00:23:32.959 } 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1022297 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1022297 ']' 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022297 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.959 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022297 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022297' 00:23:33.218 killing process with pid 1022297 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022297 00:23:33.218 Received shutdown signal, test time was about 10.000000 seconds 00:23:33.218 00:23:33.218 Latency(us) 00:23:33.218 [2024-12-13T05:29:24.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.218 [2024-12-13T05:29:24.872Z] =================================================================================================================== 00:23:33.218 [2024-12-13T05:29:24.872Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022297 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1022117 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1022117 ']' 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1022117 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1022117 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1022117' 00:23:33.218 killing process with pid 1022117 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1022117 00:23:33.218 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1022117 00:23:33.477 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:33.477 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:33.477 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:33.477 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024094 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024094 00:23:33.477 
06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024094 ']' 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:33.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:33.477 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.477 [2024-12-13 06:29:25.056116] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:33.477 [2024-12-13 06:29:25.056163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:33.735 [2024-12-13 06:29:25.135936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.735 [2024-12-13 06:29:25.155504] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.735 [2024-12-13 06:29:25.155540] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.735 [2024-12-13 06:29:25.155547] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.735 [2024-12-13 06:29:25.155553] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:33.735 [2024-12-13 06:29:25.155558] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.735 [2024-12-13 06:29:25.156075] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.Va3na0apAG 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Va3na0apAG 00:23:33.735 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.993 [2024-12-13 06:29:25.466929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.993 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:34.252 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:34.252 [2024-12-13 06:29:25.811790] tcp.c:1049:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:23:34.252 [2024-12-13 06:29:25.812005] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:34.252 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:34.510 malloc0 00:23:34.510 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:34.769 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:34.769 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1024342 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1024342 /var/tmp/bdevperf.sock 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024342 ']' 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.028 
06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.028 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.028 [2024-12-13 06:29:26.613774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:35.028 [2024-12-13 06:29:26.613821] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024342 ] 00:23:35.287 [2024-12-13 06:29:26.691639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:35.287 [2024-12-13 06:29:26.713702] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:35.287 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:35.288 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:35.288 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:35.546 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:35.546 [2024-12-13 06:29:27.140385] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:23:35.805 nvme0n1 00:23:35.805 06:29:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:35.805 Running I/O for 1 seconds... 00:23:36.741 5520.00 IOPS, 21.56 MiB/s 00:23:36.741 Latency(us) 00:23:36.741 [2024-12-13T05:29:28.395Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:36.741 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:36.741 Verification LBA range: start 0x0 length 0x2000 00:23:36.741 nvme0n1 : 1.02 5554.65 21.70 0.00 0.00 22864.36 6272.73 22219.82 00:23:36.741 [2024-12-13T05:29:28.395Z] =================================================================================================================== 00:23:36.741 [2024-12-13T05:29:28.395Z] Total : 5554.65 21.70 0.00 0.00 22864.36 6272.73 22219.82 00:23:36.741 { 00:23:36.741 "results": [ 00:23:36.741 { 00:23:36.741 "job": "nvme0n1", 00:23:36.741 "core_mask": "0x2", 00:23:36.741 "workload": "verify", 00:23:36.741 "status": "finished", 00:23:36.741 "verify_range": { 00:23:36.741 "start": 0, 00:23:36.741 "length": 8192 00:23:36.741 }, 00:23:36.741 "queue_depth": 128, 00:23:36.741 "io_size": 4096, 00:23:36.741 "runtime": 1.016805, 00:23:36.741 "iops": 5554.654038876677, 00:23:36.741 "mibps": 21.69786733936202, 00:23:36.741 "io_failed": 0, 00:23:36.741 "io_timeout": 0, 00:23:36.741 "avg_latency_us": 22864.357685147712, 00:23:36.741 "min_latency_us": 6272.731428571428, 00:23:36.741 "max_latency_us": 22219.82476190476 00:23:36.741 } 00:23:36.741 ], 00:23:36.741 "core_count": 1 00:23:36.741 } 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1024342 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024342 ']' 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1024342 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.742 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024342 00:23:37.000 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.000 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.000 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024342' 00:23:37.000 killing process with pid 1024342 00:23:37.000 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024342 00:23:37.000 Received shutdown signal, test time was about 1.000000 seconds 00:23:37.000 00:23:37.000 Latency(us) 00:23:37.000 [2024-12-13T05:29:28.654Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.000 [2024-12-13T05:29:28.655Z] =================================================================================================================== 00:23:37.001 [2024-12-13T05:29:28.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024342 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1024094 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024094 ']' 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024094 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024094 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024094' 00:23:37.001 killing process with pid 1024094 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024094 00:23:37.001 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024094 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1024794 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1024794 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024794 ']' 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.260 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.260 [2024-12-13 06:29:28.824386] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:37.260 [2024-12-13 06:29:28.824430] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.260 [2024-12-13 06:29:28.901691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.519 [2024-12-13 06:29:28.923138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.519 [2024-12-13 06:29:28.923170] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.519 [2024-12-13 06:29:28.923177] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.519 [2024-12-13 06:29:28.923183] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.519 [2024-12-13 06:29:28.923188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.519 [2024-12-13 06:29:28.923679] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.519 [2024-12-13 06:29:29.054814] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.519 malloc0 00:23:37.519 [2024-12-13 06:29:29.082832] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.519 [2024-12-13 06:29:29.083034] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1024820 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 1024820 /var/tmp/bdevperf.sock 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1024820 ']' 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:37.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.519 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.519 [2024-12-13 06:29:29.156461] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:23:37.519 [2024-12-13 06:29:29.156499] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1024820 ] 00:23:37.778 [2024-12-13 06:29:29.230940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.778 [2024-12-13 06:29:29.252721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:37.778 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:37.778 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:37.778 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Va3na0apAG 00:23:38.037 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:38.295 [2024-12-13 06:29:29.727758] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:38.295 nvme0n1 00:23:38.295 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:38.295 Running I/O for 1 seconds... 
00:23:39.673 5073.00 IOPS, 19.82 MiB/s 00:23:39.673 Latency(us) 00:23:39.673 [2024-12-13T05:29:31.327Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.673 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:39.673 Verification LBA range: start 0x0 length 0x2000 00:23:39.673 nvme0n1 : 1.01 5137.79 20.07 0.00 0.00 24748.06 4712.35 46936.26 00:23:39.673 [2024-12-13T05:29:31.327Z] =================================================================================================================== 00:23:39.673 [2024-12-13T05:29:31.327Z] Total : 5137.79 20.07 0.00 0.00 24748.06 4712.35 46936.26 00:23:39.673 { 00:23:39.673 "results": [ 00:23:39.673 { 00:23:39.673 "job": "nvme0n1", 00:23:39.673 "core_mask": "0x2", 00:23:39.673 "workload": "verify", 00:23:39.673 "status": "finished", 00:23:39.673 "verify_range": { 00:23:39.673 "start": 0, 00:23:39.673 "length": 8192 00:23:39.673 }, 00:23:39.673 "queue_depth": 128, 00:23:39.673 "io_size": 4096, 00:23:39.673 "runtime": 1.012302, 00:23:39.673 "iops": 5137.7948477825785, 00:23:39.673 "mibps": 20.069511124150697, 00:23:39.673 "io_failed": 0, 00:23:39.673 "io_timeout": 0, 00:23:39.673 "avg_latency_us": 24748.058926396938, 00:23:39.673 "min_latency_us": 4712.350476190476, 00:23:39.673 "max_latency_us": 46936.259047619045 00:23:39.673 } 00:23:39.673 ], 00:23:39.673 "core_count": 1 00:23:39.673 } 00:23:39.673 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:39.673 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:39.673 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:39.673 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:39.673 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:39.673 "subsystems": [ 00:23:39.673 { 00:23:39.673 "subsystem": 
"keyring", 00:23:39.673 "config": [ 00:23:39.673 { 00:23:39.673 "method": "keyring_file_add_key", 00:23:39.673 "params": { 00:23:39.673 "name": "key0", 00:23:39.673 "path": "/tmp/tmp.Va3na0apAG" 00:23:39.673 } 00:23:39.673 } 00:23:39.673 ] 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "subsystem": "iobuf", 00:23:39.673 "config": [ 00:23:39.673 { 00:23:39.673 "method": "iobuf_set_options", 00:23:39.673 "params": { 00:23:39.673 "small_pool_count": 8192, 00:23:39.673 "large_pool_count": 1024, 00:23:39.673 "small_bufsize": 8192, 00:23:39.673 "large_bufsize": 135168, 00:23:39.673 "enable_numa": false 00:23:39.673 } 00:23:39.673 } 00:23:39.673 ] 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "subsystem": "sock", 00:23:39.673 "config": [ 00:23:39.673 { 00:23:39.673 "method": "sock_set_default_impl", 00:23:39.673 "params": { 00:23:39.673 "impl_name": "posix" 00:23:39.673 } 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "method": "sock_impl_set_options", 00:23:39.673 "params": { 00:23:39.673 "impl_name": "ssl", 00:23:39.673 "recv_buf_size": 4096, 00:23:39.673 "send_buf_size": 4096, 00:23:39.673 "enable_recv_pipe": true, 00:23:39.673 "enable_quickack": false, 00:23:39.673 "enable_placement_id": 0, 00:23:39.673 "enable_zerocopy_send_server": true, 00:23:39.673 "enable_zerocopy_send_client": false, 00:23:39.673 "zerocopy_threshold": 0, 00:23:39.673 "tls_version": 0, 00:23:39.673 "enable_ktls": false 00:23:39.673 } 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "method": "sock_impl_set_options", 00:23:39.673 "params": { 00:23:39.673 "impl_name": "posix", 00:23:39.673 "recv_buf_size": 2097152, 00:23:39.673 "send_buf_size": 2097152, 00:23:39.673 "enable_recv_pipe": true, 00:23:39.673 "enable_quickack": false, 00:23:39.673 "enable_placement_id": 0, 00:23:39.673 "enable_zerocopy_send_server": true, 00:23:39.673 "enable_zerocopy_send_client": false, 00:23:39.673 "zerocopy_threshold": 0, 00:23:39.673 "tls_version": 0, 00:23:39.673 "enable_ktls": false 00:23:39.673 } 00:23:39.673 } 00:23:39.673 
] 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "subsystem": "vmd", 00:23:39.673 "config": [] 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "subsystem": "accel", 00:23:39.673 "config": [ 00:23:39.673 { 00:23:39.673 "method": "accel_set_options", 00:23:39.673 "params": { 00:23:39.673 "small_cache_size": 128, 00:23:39.673 "large_cache_size": 16, 00:23:39.673 "task_count": 2048, 00:23:39.673 "sequence_count": 2048, 00:23:39.673 "buf_count": 2048 00:23:39.673 } 00:23:39.673 } 00:23:39.673 ] 00:23:39.673 }, 00:23:39.673 { 00:23:39.673 "subsystem": "bdev", 00:23:39.673 "config": [ 00:23:39.673 { 00:23:39.673 "method": "bdev_set_options", 00:23:39.673 "params": { 00:23:39.673 "bdev_io_pool_size": 65535, 00:23:39.673 "bdev_io_cache_size": 256, 00:23:39.673 "bdev_auto_examine": true, 00:23:39.673 "iobuf_small_cache_size": 128, 00:23:39.673 "iobuf_large_cache_size": 16 00:23:39.673 } 00:23:39.673 }, 00:23:39.673 { 00:23:39.674 "method": "bdev_raid_set_options", 00:23:39.674 "params": { 00:23:39.674 "process_window_size_kb": 1024, 00:23:39.674 "process_max_bandwidth_mb_sec": 0 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "bdev_iscsi_set_options", 00:23:39.674 "params": { 00:23:39.674 "timeout_sec": 30 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "bdev_nvme_set_options", 00:23:39.674 "params": { 00:23:39.674 "action_on_timeout": "none", 00:23:39.674 "timeout_us": 0, 00:23:39.674 "timeout_admin_us": 0, 00:23:39.674 "keep_alive_timeout_ms": 10000, 00:23:39.674 "arbitration_burst": 0, 00:23:39.674 "low_priority_weight": 0, 00:23:39.674 "medium_priority_weight": 0, 00:23:39.674 "high_priority_weight": 0, 00:23:39.674 "nvme_adminq_poll_period_us": 10000, 00:23:39.674 "nvme_ioq_poll_period_us": 0, 00:23:39.674 "io_queue_requests": 0, 00:23:39.674 "delay_cmd_submit": true, 00:23:39.674 "transport_retry_count": 4, 00:23:39.674 "bdev_retry_count": 3, 00:23:39.674 "transport_ack_timeout": 0, 00:23:39.674 "ctrlr_loss_timeout_sec": 0, 
00:23:39.674 "reconnect_delay_sec": 0, 00:23:39.674 "fast_io_fail_timeout_sec": 0, 00:23:39.674 "disable_auto_failback": false, 00:23:39.674 "generate_uuids": false, 00:23:39.674 "transport_tos": 0, 00:23:39.674 "nvme_error_stat": false, 00:23:39.674 "rdma_srq_size": 0, 00:23:39.674 "io_path_stat": false, 00:23:39.674 "allow_accel_sequence": false, 00:23:39.674 "rdma_max_cq_size": 0, 00:23:39.674 "rdma_cm_event_timeout_ms": 0, 00:23:39.674 "dhchap_digests": [ 00:23:39.674 "sha256", 00:23:39.674 "sha384", 00:23:39.674 "sha512" 00:23:39.674 ], 00:23:39.674 "dhchap_dhgroups": [ 00:23:39.674 "null", 00:23:39.674 "ffdhe2048", 00:23:39.674 "ffdhe3072", 00:23:39.674 "ffdhe4096", 00:23:39.674 "ffdhe6144", 00:23:39.674 "ffdhe8192" 00:23:39.674 ], 00:23:39.674 "rdma_umr_per_io": false 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "bdev_nvme_set_hotplug", 00:23:39.674 "params": { 00:23:39.674 "period_us": 100000, 00:23:39.674 "enable": false 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "bdev_malloc_create", 00:23:39.674 "params": { 00:23:39.674 "name": "malloc0", 00:23:39.674 "num_blocks": 8192, 00:23:39.674 "block_size": 4096, 00:23:39.674 "physical_block_size": 4096, 00:23:39.674 "uuid": "3cee39e8-b254-4a48-979c-50b55279e7f3", 00:23:39.674 "optimal_io_boundary": 0, 00:23:39.674 "md_size": 0, 00:23:39.674 "dif_type": 0, 00:23:39.674 "dif_is_head_of_md": false, 00:23:39.674 "dif_pi_format": 0 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "bdev_wait_for_examine" 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "subsystem": "nbd", 00:23:39.674 "config": [] 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "subsystem": "scheduler", 00:23:39.674 "config": [ 00:23:39.674 { 00:23:39.674 "method": "framework_set_scheduler", 00:23:39.674 "params": { 00:23:39.674 "name": "static" 00:23:39.674 } 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "subsystem": "nvmf", 
00:23:39.674 "config": [ 00:23:39.674 { 00:23:39.674 "method": "nvmf_set_config", 00:23:39.674 "params": { 00:23:39.674 "discovery_filter": "match_any", 00:23:39.674 "admin_cmd_passthru": { 00:23:39.674 "identify_ctrlr": false 00:23:39.674 }, 00:23:39.674 "dhchap_digests": [ 00:23:39.674 "sha256", 00:23:39.674 "sha384", 00:23:39.674 "sha512" 00:23:39.674 ], 00:23:39.674 "dhchap_dhgroups": [ 00:23:39.674 "null", 00:23:39.674 "ffdhe2048", 00:23:39.674 "ffdhe3072", 00:23:39.674 "ffdhe4096", 00:23:39.674 "ffdhe6144", 00:23:39.674 "ffdhe8192" 00:23:39.674 ] 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_set_max_subsystems", 00:23:39.674 "params": { 00:23:39.674 "max_subsystems": 1024 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_set_crdt", 00:23:39.674 "params": { 00:23:39.674 "crdt1": 0, 00:23:39.674 "crdt2": 0, 00:23:39.674 "crdt3": 0 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_create_transport", 00:23:39.674 "params": { 00:23:39.674 "trtype": "TCP", 00:23:39.674 "max_queue_depth": 128, 00:23:39.674 "max_io_qpairs_per_ctrlr": 127, 00:23:39.674 "in_capsule_data_size": 4096, 00:23:39.674 "max_io_size": 131072, 00:23:39.674 "io_unit_size": 131072, 00:23:39.674 "max_aq_depth": 128, 00:23:39.674 "num_shared_buffers": 511, 00:23:39.674 "buf_cache_size": 4294967295, 00:23:39.674 "dif_insert_or_strip": false, 00:23:39.674 "zcopy": false, 00:23:39.674 "c2h_success": false, 00:23:39.674 "sock_priority": 0, 00:23:39.674 "abort_timeout_sec": 1, 00:23:39.674 "ack_timeout": 0, 00:23:39.674 "data_wr_pool_size": 0 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_create_subsystem", 00:23:39.674 "params": { 00:23:39.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.674 "allow_any_host": false, 00:23:39.674 "serial_number": "00000000000000000000", 00:23:39.674 "model_number": "SPDK bdev Controller", 00:23:39.674 "max_namespaces": 32, 00:23:39.674 "min_cntlid": 1, 
00:23:39.674 "max_cntlid": 65519, 00:23:39.674 "ana_reporting": false 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_subsystem_add_host", 00:23:39.674 "params": { 00:23:39.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.674 "host": "nqn.2016-06.io.spdk:host1", 00:23:39.674 "psk": "key0" 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_subsystem_add_ns", 00:23:39.674 "params": { 00:23:39.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.674 "namespace": { 00:23:39.674 "nsid": 1, 00:23:39.674 "bdev_name": "malloc0", 00:23:39.674 "nguid": "3CEE39E8B2544A48979C50B55279E7F3", 00:23:39.674 "uuid": "3cee39e8-b254-4a48-979c-50b55279e7f3", 00:23:39.674 "no_auto_visible": false 00:23:39.674 } 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "nvmf_subsystem_add_listener", 00:23:39.674 "params": { 00:23:39.674 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.674 "listen_address": { 00:23:39.674 "trtype": "TCP", 00:23:39.674 "adrfam": "IPv4", 00:23:39.674 "traddr": "10.0.0.2", 00:23:39.674 "trsvcid": "4420" 00:23:39.674 }, 00:23:39.674 "secure_channel": false, 00:23:39.674 "sock_impl": "ssl" 00:23:39.674 } 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }' 00:23:39.674 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:39.674 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:39.674 "subsystems": [ 00:23:39.674 { 00:23:39.674 "subsystem": "keyring", 00:23:39.674 "config": [ 00:23:39.674 { 00:23:39.674 "method": "keyring_file_add_key", 00:23:39.674 "params": { 00:23:39.674 "name": "key0", 00:23:39.674 "path": "/tmp/tmp.Va3na0apAG" 00:23:39.674 } 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "subsystem": "iobuf", 00:23:39.674 "config": [ 00:23:39.674 { 00:23:39.674 "method": 
"iobuf_set_options", 00:23:39.674 "params": { 00:23:39.674 "small_pool_count": 8192, 00:23:39.674 "large_pool_count": 1024, 00:23:39.674 "small_bufsize": 8192, 00:23:39.674 "large_bufsize": 135168, 00:23:39.674 "enable_numa": false 00:23:39.674 } 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "subsystem": "sock", 00:23:39.674 "config": [ 00:23:39.674 { 00:23:39.674 "method": "sock_set_default_impl", 00:23:39.674 "params": { 00:23:39.674 "impl_name": "posix" 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "sock_impl_set_options", 00:23:39.674 "params": { 00:23:39.674 "impl_name": "ssl", 00:23:39.674 "recv_buf_size": 4096, 00:23:39.674 "send_buf_size": 4096, 00:23:39.674 "enable_recv_pipe": true, 00:23:39.674 "enable_quickack": false, 00:23:39.674 "enable_placement_id": 0, 00:23:39.674 "enable_zerocopy_send_server": true, 00:23:39.674 "enable_zerocopy_send_client": false, 00:23:39.674 "zerocopy_threshold": 0, 00:23:39.674 "tls_version": 0, 00:23:39.674 "enable_ktls": false 00:23:39.674 } 00:23:39.674 }, 00:23:39.674 { 00:23:39.674 "method": "sock_impl_set_options", 00:23:39.674 "params": { 00:23:39.674 "impl_name": "posix", 00:23:39.674 "recv_buf_size": 2097152, 00:23:39.674 "send_buf_size": 2097152, 00:23:39.674 "enable_recv_pipe": true, 00:23:39.674 "enable_quickack": false, 00:23:39.674 "enable_placement_id": 0, 00:23:39.674 "enable_zerocopy_send_server": true, 00:23:39.674 "enable_zerocopy_send_client": false, 00:23:39.674 "zerocopy_threshold": 0, 00:23:39.674 "tls_version": 0, 00:23:39.674 "enable_ktls": false 00:23:39.674 } 00:23:39.674 } 00:23:39.674 ] 00:23:39.674 }, 00:23:39.674 { 00:23:39.675 "subsystem": "vmd", 00:23:39.675 "config": [] 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "subsystem": "accel", 00:23:39.675 "config": [ 00:23:39.675 { 00:23:39.675 "method": "accel_set_options", 00:23:39.675 "params": { 00:23:39.675 "small_cache_size": 128, 00:23:39.675 "large_cache_size": 16, 00:23:39.675 "task_count": 
2048, 00:23:39.675 "sequence_count": 2048, 00:23:39.675 "buf_count": 2048 00:23:39.675 } 00:23:39.675 } 00:23:39.675 ] 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "subsystem": "bdev", 00:23:39.675 "config": [ 00:23:39.675 { 00:23:39.675 "method": "bdev_set_options", 00:23:39.675 "params": { 00:23:39.675 "bdev_io_pool_size": 65535, 00:23:39.675 "bdev_io_cache_size": 256, 00:23:39.675 "bdev_auto_examine": true, 00:23:39.675 "iobuf_small_cache_size": 128, 00:23:39.675 "iobuf_large_cache_size": 16 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_raid_set_options", 00:23:39.675 "params": { 00:23:39.675 "process_window_size_kb": 1024, 00:23:39.675 "process_max_bandwidth_mb_sec": 0 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_iscsi_set_options", 00:23:39.675 "params": { 00:23:39.675 "timeout_sec": 30 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_nvme_set_options", 00:23:39.675 "params": { 00:23:39.675 "action_on_timeout": "none", 00:23:39.675 "timeout_us": 0, 00:23:39.675 "timeout_admin_us": 0, 00:23:39.675 "keep_alive_timeout_ms": 10000, 00:23:39.675 "arbitration_burst": 0, 00:23:39.675 "low_priority_weight": 0, 00:23:39.675 "medium_priority_weight": 0, 00:23:39.675 "high_priority_weight": 0, 00:23:39.675 "nvme_adminq_poll_period_us": 10000, 00:23:39.675 "nvme_ioq_poll_period_us": 0, 00:23:39.675 "io_queue_requests": 512, 00:23:39.675 "delay_cmd_submit": true, 00:23:39.675 "transport_retry_count": 4, 00:23:39.675 "bdev_retry_count": 3, 00:23:39.675 "transport_ack_timeout": 0, 00:23:39.675 "ctrlr_loss_timeout_sec": 0, 00:23:39.675 "reconnect_delay_sec": 0, 00:23:39.675 "fast_io_fail_timeout_sec": 0, 00:23:39.675 "disable_auto_failback": false, 00:23:39.675 "generate_uuids": false, 00:23:39.675 "transport_tos": 0, 00:23:39.675 "nvme_error_stat": false, 00:23:39.675 "rdma_srq_size": 0, 00:23:39.675 "io_path_stat": false, 00:23:39.675 "allow_accel_sequence": false, 00:23:39.675 
"rdma_max_cq_size": 0, 00:23:39.675 "rdma_cm_event_timeout_ms": 0, 00:23:39.675 "dhchap_digests": [ 00:23:39.675 "sha256", 00:23:39.675 "sha384", 00:23:39.675 "sha512" 00:23:39.675 ], 00:23:39.675 "dhchap_dhgroups": [ 00:23:39.675 "null", 00:23:39.675 "ffdhe2048", 00:23:39.675 "ffdhe3072", 00:23:39.675 "ffdhe4096", 00:23:39.675 "ffdhe6144", 00:23:39.675 "ffdhe8192" 00:23:39.675 ], 00:23:39.675 "rdma_umr_per_io": false 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_nvme_attach_controller", 00:23:39.675 "params": { 00:23:39.675 "name": "nvme0", 00:23:39.675 "trtype": "TCP", 00:23:39.675 "adrfam": "IPv4", 00:23:39.675 "traddr": "10.0.0.2", 00:23:39.675 "trsvcid": "4420", 00:23:39.675 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:39.675 "prchk_reftag": false, 00:23:39.675 "prchk_guard": false, 00:23:39.675 "ctrlr_loss_timeout_sec": 0, 00:23:39.675 "reconnect_delay_sec": 0, 00:23:39.675 "fast_io_fail_timeout_sec": 0, 00:23:39.675 "psk": "key0", 00:23:39.675 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:39.675 "hdgst": false, 00:23:39.675 "ddgst": false, 00:23:39.675 "multipath": "multipath" 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_nvme_set_hotplug", 00:23:39.675 "params": { 00:23:39.675 "period_us": 100000, 00:23:39.675 "enable": false 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_enable_histogram", 00:23:39.675 "params": { 00:23:39.675 "name": "nvme0n1", 00:23:39.675 "enable": true 00:23:39.675 } 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "method": "bdev_wait_for_examine" 00:23:39.675 } 00:23:39.675 ] 00:23:39.675 }, 00:23:39.675 { 00:23:39.675 "subsystem": "nbd", 00:23:39.675 "config": [] 00:23:39.675 } 00:23:39.675 ] 00:23:39.675 }' 00:23:39.675 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1024820 00:23:39.675 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024820 ']' 00:23:39.675 06:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024820 00:23:39.675 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.675 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.675 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024820 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024820' 00:23:39.935 killing process with pid 1024820 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024820 00:23:39.935 Received shutdown signal, test time was about 1.000000 seconds 00:23:39.935 00:23:39.935 Latency(us) 00:23:39.935 [2024-12-13T05:29:31.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:39.935 [2024-12-13T05:29:31.589Z] =================================================================================================================== 00:23:39.935 [2024-12-13T05:29:31.589Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024820 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1024794 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1024794 ']' 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1024794 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:39.935 06:29:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1024794 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1024794' 00:23:39.935 killing process with pid 1024794 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1024794 00:23:39.935 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1024794 00:23:40.195 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:40.195 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:40.195 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:40.195 "subsystems": [ 00:23:40.195 { 00:23:40.195 "subsystem": "keyring", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "keyring_file_add_key", 00:23:40.195 "params": { 00:23:40.195 "name": "key0", 00:23:40.195 "path": "/tmp/tmp.Va3na0apAG" 00:23:40.195 } 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "iobuf", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "iobuf_set_options", 00:23:40.195 "params": { 00:23:40.195 "small_pool_count": 8192, 00:23:40.195 "large_pool_count": 1024, 00:23:40.195 "small_bufsize": 8192, 00:23:40.195 "large_bufsize": 135168, 00:23:40.195 "enable_numa": false 00:23:40.195 } 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "sock", 00:23:40.195 "config": [ 00:23:40.195 
{ 00:23:40.195 "method": "sock_set_default_impl", 00:23:40.195 "params": { 00:23:40.195 "impl_name": "posix" 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "sock_impl_set_options", 00:23:40.195 "params": { 00:23:40.195 "impl_name": "ssl", 00:23:40.195 "recv_buf_size": 4096, 00:23:40.195 "send_buf_size": 4096, 00:23:40.195 "enable_recv_pipe": true, 00:23:40.195 "enable_quickack": false, 00:23:40.195 "enable_placement_id": 0, 00:23:40.195 "enable_zerocopy_send_server": true, 00:23:40.195 "enable_zerocopy_send_client": false, 00:23:40.195 "zerocopy_threshold": 0, 00:23:40.195 "tls_version": 0, 00:23:40.195 "enable_ktls": false 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "sock_impl_set_options", 00:23:40.195 "params": { 00:23:40.195 "impl_name": "posix", 00:23:40.195 "recv_buf_size": 2097152, 00:23:40.195 "send_buf_size": 2097152, 00:23:40.195 "enable_recv_pipe": true, 00:23:40.195 "enable_quickack": false, 00:23:40.195 "enable_placement_id": 0, 00:23:40.195 "enable_zerocopy_send_server": true, 00:23:40.195 "enable_zerocopy_send_client": false, 00:23:40.195 "zerocopy_threshold": 0, 00:23:40.195 "tls_version": 0, 00:23:40.195 "enable_ktls": false 00:23:40.195 } 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "vmd", 00:23:40.195 "config": [] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "accel", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "accel_set_options", 00:23:40.195 "params": { 00:23:40.195 "small_cache_size": 128, 00:23:40.195 "large_cache_size": 16, 00:23:40.195 "task_count": 2048, 00:23:40.195 "sequence_count": 2048, 00:23:40.195 "buf_count": 2048 00:23:40.195 } 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "bdev", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "bdev_set_options", 00:23:40.195 "params": { 00:23:40.195 "bdev_io_pool_size": 65535, 00:23:40.195 "bdev_io_cache_size": 256, 
00:23:40.195 "bdev_auto_examine": true, 00:23:40.195 "iobuf_small_cache_size": 128, 00:23:40.195 "iobuf_large_cache_size": 16 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_raid_set_options", 00:23:40.195 "params": { 00:23:40.195 "process_window_size_kb": 1024, 00:23:40.195 "process_max_bandwidth_mb_sec": 0 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_iscsi_set_options", 00:23:40.195 "params": { 00:23:40.195 "timeout_sec": 30 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_nvme_set_options", 00:23:40.195 "params": { 00:23:40.195 "action_on_timeout": "none", 00:23:40.195 "timeout_us": 0, 00:23:40.195 "timeout_admin_us": 0, 00:23:40.195 "keep_alive_timeout_ms": 10000, 00:23:40.195 "arbitration_burst": 0, 00:23:40.195 "low_priority_weight": 0, 00:23:40.195 "medium_priority_weight": 0, 00:23:40.195 "high_priority_weight": 0, 00:23:40.195 "nvme_adminq_poll_period_us": 10000, 00:23:40.195 "nvme_ioq_poll_period_us": 0, 00:23:40.195 "io_queue_requests": 0, 00:23:40.195 "delay_cmd_submit": true, 00:23:40.195 "transport_retry_count": 4, 00:23:40.195 "bdev_retry_count": 3, 00:23:40.195 "transport_ack_timeout": 0, 00:23:40.195 "ctrlr_loss_timeout_sec": 0, 00:23:40.195 "reconnect_delay_sec": 0, 00:23:40.195 "fast_io_fail_timeout_sec": 0, 00:23:40.195 "disable_auto_failback": false, 00:23:40.195 "generate_uuids": false, 00:23:40.195 "transport_tos": 0, 00:23:40.195 "nvme_error_stat": false, 00:23:40.195 "rdma_srq_size": 0, 00:23:40.195 "io_path_stat": false, 00:23:40.195 "allow_accel_sequence": false, 00:23:40.195 "rdma_max_cq_size": 0, 00:23:40.195 "rdma_cm_event_timeout_ms": 0, 00:23:40.195 "dhchap_digests": [ 00:23:40.195 "sha256", 00:23:40.195 "sha384", 00:23:40.195 "sha512" 00:23:40.195 ], 00:23:40.195 "dhchap_dhgroups": [ 00:23:40.195 "null", 00:23:40.195 "ffdhe2048", 00:23:40.195 "ffdhe3072", 00:23:40.195 "ffdhe4096", 00:23:40.195 "ffdhe6144", 00:23:40.195 "ffdhe8192" 00:23:40.195 ], 
00:23:40.195 "rdma_umr_per_io": false 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_nvme_set_hotplug", 00:23:40.195 "params": { 00:23:40.195 "period_us": 100000, 00:23:40.195 "enable": false 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_malloc_create", 00:23:40.195 "params": { 00:23:40.195 "name": "malloc0", 00:23:40.195 "num_blocks": 8192, 00:23:40.195 "block_size": 4096, 00:23:40.195 "physical_block_size": 4096, 00:23:40.195 "uuid": "3cee39e8-b254-4a48-979c-50b55279e7f3", 00:23:40.195 "optimal_io_boundary": 0, 00:23:40.195 "md_size": 0, 00:23:40.195 "dif_type": 0, 00:23:40.195 "dif_is_head_of_md": false, 00:23:40.195 "dif_pi_format": 0 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "bdev_wait_for_examine" 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "nbd", 00:23:40.195 "config": [] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "scheduler", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "framework_set_scheduler", 00:23:40.195 "params": { 00:23:40.195 "name": "static" 00:23:40.195 } 00:23:40.195 } 00:23:40.195 ] 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "subsystem": "nvmf", 00:23:40.195 "config": [ 00:23:40.195 { 00:23:40.195 "method": "nvmf_set_config", 00:23:40.195 "params": { 00:23:40.195 "discovery_filter": "match_any", 00:23:40.195 "admin_cmd_passthru": { 00:23:40.195 "identify_ctrlr": false 00:23:40.195 }, 00:23:40.195 "dhchap_digests": [ 00:23:40.195 "sha256", 00:23:40.195 "sha384", 00:23:40.195 "sha512" 00:23:40.195 ], 00:23:40.195 "dhchap_dhgroups": [ 00:23:40.195 "null", 00:23:40.195 "ffdhe2048", 00:23:40.195 "ffdhe3072", 00:23:40.195 "ffdhe4096", 00:23:40.195 "ffdhe6144", 00:23:40.195 "ffdhe8192" 00:23:40.195 ] 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "nvmf_set_max_subsystems", 00:23:40.195 "params": { 00:23:40.195 "max_subsystems": 1024 00:23:40.195 } 00:23:40.195 }, 
00:23:40.195 { 00:23:40.195 "method": "nvmf_set_crdt", 00:23:40.195 "params": { 00:23:40.195 "crdt1": 0, 00:23:40.195 "crdt2": 0, 00:23:40.195 "crdt3": 0 00:23:40.195 } 00:23:40.195 }, 00:23:40.195 { 00:23:40.195 "method": "nvmf_create_transport", 00:23:40.195 "params": { 00:23:40.195 "trtype": "TCP", 00:23:40.195 "max_queue_depth": 128, 00:23:40.195 "max_io_qpairs_per_ctrlr": 127, 00:23:40.195 "in_capsule_data_size": 4096, 00:23:40.195 "max_io_size": 131072, 00:23:40.196 "io_unit_size": 131072, 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:40.196 "max_aq_depth": 128, 00:23:40.196 "num_shared_buffers": 511, 00:23:40.196 "buf_cache_size": 4294967295, 00:23:40.196 "dif_insert_or_strip": false, 00:23:40.196 "zcopy": false, 00:23:40.196 "c2h_success": false, 00:23:40.196 "sock_priority": 0, 00:23:40.196 "abort_timeout_sec": 1, 00:23:40.196 "ack_timeout": 0, 00:23:40.196 "data_wr_pool_size": 0 00:23:40.196 } 00:23:40.196 }, 00:23:40.196 { 00:23:40.196 "method": "nvmf_create_subsystem", 00:23:40.196 "params": { 00:23:40.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.196 "allow_any_host": false, 00:23:40.196 "serial_number": "00000000000000000000", 00:23:40.196 "model_number": "SPDK bdev Controller", 00:23:40.196 "max_namespaces": 32, 00:23:40.196 "min_cntlid": 1, 00:23:40.196 "max_cntlid": 65519, 00:23:40.196 "ana_reporting": false 00:23:40.196 } 00:23:40.196 }, 00:23:40.196 { 00:23:40.196 "method": "nvmf_subsystem_add_host", 00:23:40.196 "params": { 00:23:40.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.196 "host": "nqn.2016-06.io.spdk:host1", 00:23:40.196 "psk": "key0" 00:23:40.196 } 00:23:40.196 }, 00:23:40.196 { 00:23:40.196 "method": "nvmf_subsystem_add_ns", 00:23:40.196 "params": { 00:23:40.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.196 "namespace": { 00:23:40.196 "nsid": 1, 00:23:40.196 "bdev_name": "malloc0", 00:23:40.196 "nguid": "3CEE39E8B2544A48979C50B55279E7F3", 00:23:40.196 
"uuid": "3cee39e8-b254-4a48-979c-50b55279e7f3", 00:23:40.196 "no_auto_visible": false 00:23:40.196 } 00:23:40.196 } 00:23:40.196 }, 00:23:40.196 { 00:23:40.196 "method": "nvmf_subsystem_add_listener", 00:23:40.196 "params": { 00:23:40.196 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.196 "listen_address": { 00:23:40.196 "trtype": "TCP", 00:23:40.196 "adrfam": "IPv4", 00:23:40.196 "traddr": "10.0.0.2", 00:23:40.196 "trsvcid": "4420" 00:23:40.196 }, 00:23:40.196 "secure_channel": false, 00:23:40.196 "sock_impl": "ssl" 00:23:40.196 } 00:23:40.196 } 00:23:40.196 ] 00:23:40.196 } 00:23:40.196 ] 00:23:40.196 }' 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=1025279 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 1025279 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025279 ']' 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.196 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.196 [2024-12-13 06:29:31.772538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:40.196 [2024-12-13 06:29:31.772586] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:40.455 [2024-12-13 06:29:31.850595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.455 [2024-12-13 06:29:31.871696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:40.455 [2024-12-13 06:29:31.871732] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:40.455 [2024-12-13 06:29:31.871739] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:40.455 [2024-12-13 06:29:31.871745] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:40.455 [2024-12-13 06:29:31.871750] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:40.455 [2024-12-13 06:29:31.872267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.455 [2024-12-13 06:29:32.078934] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.714 [2024-12-13 06:29:32.110980] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.714 [2024-12-13 06:29:32.111172] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.972 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.972 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:40.972 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.972 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.972 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.231 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.231 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1025515 00:23:41.231 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1025515 /var/tmp/bdevperf.sock 00:23:41.231 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1025515 ']' 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:41.232 "subsystems": [ 00:23:41.232 { 00:23:41.232 "subsystem": "keyring", 00:23:41.232 "config": [ 00:23:41.232 { 00:23:41.232 "method": "keyring_file_add_key", 00:23:41.232 "params": { 00:23:41.232 "name": "key0", 00:23:41.232 "path": "/tmp/tmp.Va3na0apAG" 00:23:41.232 } 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "iobuf", 00:23:41.232 "config": [ 00:23:41.232 { 00:23:41.232 "method": "iobuf_set_options", 00:23:41.232 "params": { 00:23:41.232 "small_pool_count": 8192, 00:23:41.232 "large_pool_count": 1024, 00:23:41.232 "small_bufsize": 8192, 00:23:41.232 "large_bufsize": 135168, 00:23:41.232 "enable_numa": false 00:23:41.232 } 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "sock", 00:23:41.232 "config": [ 00:23:41.232 { 00:23:41.232 "method": "sock_set_default_impl", 00:23:41.232 "params": { 00:23:41.232 "impl_name": "posix" 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "sock_impl_set_options", 00:23:41.232 "params": { 00:23:41.232 "impl_name": "ssl", 00:23:41.232 "recv_buf_size": 4096, 00:23:41.232 "send_buf_size": 4096, 00:23:41.232 "enable_recv_pipe": true, 00:23:41.232 "enable_quickack": false, 00:23:41.232 "enable_placement_id": 0, 00:23:41.232 "enable_zerocopy_send_server": true, 00:23:41.232 "enable_zerocopy_send_client": false, 00:23:41.232 "zerocopy_threshold": 0, 00:23:41.232 "tls_version": 0, 00:23:41.232 "enable_ktls": false 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "sock_impl_set_options", 00:23:41.232 "params": { 
00:23:41.232 "impl_name": "posix", 00:23:41.232 "recv_buf_size": 2097152, 00:23:41.232 "send_buf_size": 2097152, 00:23:41.232 "enable_recv_pipe": true, 00:23:41.232 "enable_quickack": false, 00:23:41.232 "enable_placement_id": 0, 00:23:41.232 "enable_zerocopy_send_server": true, 00:23:41.232 "enable_zerocopy_send_client": false, 00:23:41.232 "zerocopy_threshold": 0, 00:23:41.232 "tls_version": 0, 00:23:41.232 "enable_ktls": false 00:23:41.232 } 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "vmd", 00:23:41.232 "config": [] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "accel", 00:23:41.232 "config": [ 00:23:41.232 { 00:23:41.232 "method": "accel_set_options", 00:23:41.232 "params": { 00:23:41.232 "small_cache_size": 128, 00:23:41.232 "large_cache_size": 16, 00:23:41.232 "task_count": 2048, 00:23:41.232 "sequence_count": 2048, 00:23:41.232 "buf_count": 2048 00:23:41.232 } 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "bdev", 00:23:41.232 "config": [ 00:23:41.232 { 00:23:41.232 "method": "bdev_set_options", 00:23:41.232 "params": { 00:23:41.232 "bdev_io_pool_size": 65535, 00:23:41.232 "bdev_io_cache_size": 256, 00:23:41.232 "bdev_auto_examine": true, 00:23:41.232 "iobuf_small_cache_size": 128, 00:23:41.232 "iobuf_large_cache_size": 16 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_raid_set_options", 00:23:41.232 "params": { 00:23:41.232 "process_window_size_kb": 1024, 00:23:41.232 "process_max_bandwidth_mb_sec": 0 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_iscsi_set_options", 00:23:41.232 "params": { 00:23:41.232 "timeout_sec": 30 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_nvme_set_options", 00:23:41.232 "params": { 00:23:41.232 "action_on_timeout": "none", 00:23:41.232 "timeout_us": 0, 00:23:41.232 "timeout_admin_us": 0, 00:23:41.232 "keep_alive_timeout_ms": 10000, 00:23:41.232 
"arbitration_burst": 0, 00:23:41.232 "low_priority_weight": 0, 00:23:41.232 "medium_priority_weight": 0, 00:23:41.232 "high_priority_weight": 0, 00:23:41.232 "nvme_adminq_poll_period_us": 10000, 00:23:41.232 "nvme_ioq_poll_period_us": 0, 00:23:41.232 "io_queue_requests": 512, 00:23:41.232 "delay_cmd_submit": true, 00:23:41.232 "transport_retry_count": 4, 00:23:41.232 "bdev_retry_count": 3, 00:23:41.232 "transport_ack_timeout": 0, 00:23:41.232 "ctrlr_loss_timeout_sec": 0, 00:23:41.232 "reconnect_delay_sec": 0, 00:23:41.232 "fast_io_fail_timeout_sec": 0, 00:23:41.232 "disable_auto_failback": false, 00:23:41.232 "generate_uuids": false, 00:23:41.232 "transport_tos": 0, 00:23:41.232 "nvme_error_stat": false, 00:23:41.232 "rdma_srq_size": 0, 00:23:41.232 "io_path_stat": false, 00:23:41.232 "allow_accel_sequence": false, 00:23:41.232 "rdma_max_cq_size": 0, 00:23:41.232 "rdma_cm_event_timeout_ms": 0, 00:23:41.232 "dhchap_digests": [ 00:23:41.232 "sha256", 00:23:41.232 "sha384", 00:23:41.232 "sha512" 00:23:41.232 ], 00:23:41.232 "dhchap_dhgroups": [ 00:23:41.232 "null", 00:23:41.232 "ffdhe2048", 00:23:41.232 "ffdhe3072", 00:23:41.232 "ffdhe4096", 00:23:41.232 "ffdhe6144", 00:23:41.232 "ffdhe8192" 00:23:41.232 ], 00:23:41.232 "rdma_umr_per_io": false 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_nvme_attach_controller", 00:23:41.232 "params": { 00:23:41.232 "name": "nvme0", 00:23:41.232 "trtype": "TCP", 00:23:41.232 "adrfam": "IPv4", 00:23:41.232 "traddr": "10.0.0.2", 00:23:41.232 "trsvcid": "4420", 00:23:41.232 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:41.232 "prchk_reftag": false, 00:23:41.232 "prchk_guard": false, 00:23:41.232 "ctrlr_loss_timeout_sec": 0, 00:23:41.232 "reconnect_delay_sec": 0, 00:23:41.232 "fast_io_fail_timeout_sec": 0, 00:23:41.232 "psk": "key0", 00:23:41.232 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:41.232 "hdgst": false, 00:23:41.232 "ddgst": false, 00:23:41.232 "multipath": "multipath" 00:23:41.232 } 00:23:41.232 
}, 00:23:41.232 { 00:23:41.232 "method": "bdev_nvme_set_hotplug", 00:23:41.232 "params": { 00:23:41.232 "period_us": 100000, 00:23:41.232 "enable": false 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_enable_histogram", 00:23:41.232 "params": { 00:23:41.232 "name": "nvme0n1", 00:23:41.232 "enable": true 00:23:41.232 } 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "method": "bdev_wait_for_examine" 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }, 00:23:41.232 { 00:23:41.232 "subsystem": "nbd", 00:23:41.232 "config": [] 00:23:41.232 } 00:23:41.232 ] 00:23:41.232 }' 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:41.232 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.232 [2024-12-13 06:29:32.706966] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:23:41.232 [2024-12-13 06:29:32.707014] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1025515 ] 00:23:41.232 [2024-12-13 06:29:32.781881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.232 [2024-12-13 06:29:32.803676] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.491 [2024-12-13 06:29:32.952251] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.058 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:42.058 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:42.058 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 
00:23:42.058 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:42.316 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.316 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.316 Running I/O for 1 seconds... 00:23:43.252 5306.00 IOPS, 20.73 MiB/s 00:23:43.252 Latency(us) 00:23:43.252 [2024-12-13T05:29:34.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.252 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.252 Verification LBA range: start 0x0 length 0x2000 00:23:43.252 nvme0n1 : 1.02 5342.98 20.87 0.00 0.00 23786.81 4837.18 33953.89 00:23:43.252 [2024-12-13T05:29:34.906Z] =================================================================================================================== 00:23:43.252 [2024-12-13T05:29:34.906Z] Total : 5342.98 20.87 0.00 0.00 23786.81 4837.18 33953.89 00:23:43.252 { 00:23:43.252 "results": [ 00:23:43.252 { 00:23:43.252 "job": "nvme0n1", 00:23:43.252 "core_mask": "0x2", 00:23:43.252 "workload": "verify", 00:23:43.252 "status": "finished", 00:23:43.252 "verify_range": { 00:23:43.252 "start": 0, 00:23:43.252 "length": 8192 00:23:43.252 }, 00:23:43.252 "queue_depth": 128, 00:23:43.252 "io_size": 4096, 00:23:43.252 "runtime": 1.017036, 00:23:43.252 "iops": 5342.977043093853, 00:23:43.252 "mibps": 20.871004074585365, 00:23:43.252 "io_failed": 0, 00:23:43.252 "io_timeout": 0, 00:23:43.252 "avg_latency_us": 23786.807326007325, 00:23:43.252 "min_latency_us": 4837.1809523809525, 00:23:43.252 "max_latency_us": 33953.88952380952 00:23:43.252 } 00:23:43.252 ], 00:23:43.252 "core_count": 1 00:23:43.252 } 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:23:43.252 
06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:23:43.252 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:43.252 nvmf_trace.0 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1025515 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025515 ']' 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025515 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.511 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1025515 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025515' 00:23:43.511 killing process with pid 1025515 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025515 00:23:43.511 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.511 00:23:43.511 Latency(us) 00:23:43.511 [2024-12-13T05:29:35.165Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.511 [2024-12-13T05:29:35.165Z] =================================================================================================================== 00:23:43.511 [2024-12-13T05:29:35.165Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025515 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:43.511 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:43.770 rmmod nvme_tcp 00:23:43.770 rmmod nvme_fabrics 00:23:43.770 rmmod nvme_keyring 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- 
# modprobe -v -r nvme-fabrics 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 1025279 ']' 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 1025279 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1025279 ']' 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1025279 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1025279 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1025279' 00:23:43.770 killing process with pid 1025279 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1025279 00:23:43.770 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1025279 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:44.029 06:29:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:44.029 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.SS10HDVRyj /tmp/tmp.wU90XNfpTa /tmp/tmp.Va3na0apAG 00:23:45.932 00:23:45.932 real 1m18.701s 00:23:45.932 user 2m0.299s 00:23:45.932 sys 0m30.473s 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.932 ************************************ 00:23:45.932 END TEST nvmf_tls 00:23:45.932 ************************************ 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:45.932 
06:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:45.932 06:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:46.192 ************************************ 00:23:46.192 START TEST nvmf_fips 00:23:46.192 ************************************ 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:46.192 * Looking for test storage... 00:23:46.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
scripts/common.sh@340 -- # ver1_l=2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # 
return 0 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:23:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.192 --rc genhtml_branch_coverage=1 00:23:46.192 --rc genhtml_function_coverage=1 00:23:46.192 --rc genhtml_legend=1 00:23:46.192 --rc geninfo_all_blocks=1 00:23:46.192 --rc geninfo_unexecuted_blocks=1 00:23:46.192 00:23:46.192 ' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:23:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.192 --rc genhtml_branch_coverage=1 00:23:46.192 --rc genhtml_function_coverage=1 00:23:46.192 --rc genhtml_legend=1 00:23:46.192 --rc geninfo_all_blocks=1 00:23:46.192 --rc geninfo_unexecuted_blocks=1 00:23:46.192 00:23:46.192 ' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:23:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.192 --rc genhtml_branch_coverage=1 00:23:46.192 --rc genhtml_function_coverage=1 00:23:46.192 --rc genhtml_legend=1 00:23:46.192 --rc geninfo_all_blocks=1 00:23:46.192 --rc geninfo_unexecuted_blocks=1 00:23:46.192 00:23:46.192 ' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:23:46.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:46.192 --rc genhtml_branch_coverage=1 00:23:46.192 --rc genhtml_function_coverage=1 00:23:46.192 --rc genhtml_legend=1 00:23:46.192 --rc geninfo_all_blocks=1 00:23:46.192 --rc geninfo_unexecuted_blocks=1 00:23:46.192 00:23:46.192 ' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:46.192 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:46.193 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:23:46.193 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]]
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62
00:23:46.452 Error setting digest
00:23:46.452 40A2AA5EBC7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties ()
00:23:46.452 40A2AA5EBC7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272:
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']'
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]]
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable
00:23:46.452 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=()
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}")
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}")
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 ))
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)'
00:23:53.021 Found 0000:af:00.0 (0x8086 - 0x159b)
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}"
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)'
00:23:53.021 Found 0000:af:00.1 (0x8086 - 0x159b)
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 ))
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:53.021 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0'
00:23:53.022 Found net devices under 0000:af:00.0: cvl_0_0
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}"
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}"
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 ))
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1'
00:23:53.022 Found net devices under 0000:af:00.1: cvl_0_1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 ))
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:23:53.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:53.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms
00:23:53.022
00:23:53.022 --- 10.0.0.2 ping statistics ---
00:23:53.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:53.022 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:53.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:53.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms
00:23:53.022
00:23:53.022 --- 10.0.0.1 ping statistics ---
00:23:53.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:53.022 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=1029410
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 1029410
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029410 ']'
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:53.022 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:23:53.022 [2024-12-13 06:29:43.933351] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:23:53.022 [2024-12-13 06:29:43.933399] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:23:53.022 [2024-12-13 06:29:43.994519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:53.022 [2024-12-13 06:29:44.015617] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:23:53.022 [2024-12-13 06:29:44.015651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:23:53.022 [2024-12-13 06:29:44.015658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:23:53.022 [2024-12-13 06:29:44.015664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:23:53.022 [2024-12-13 06:29:44.015669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:23:53.022 [2024-12-13 06:29:44.016148] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.BFj
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.BFj
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.BFj
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.BFj
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:23:53.022 [2024-12-13 06:29:44.326650] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:53.022 [2024-12-13 06:29:44.342667] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:23:53.022 [2024-12-13 06:29:44.342861] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:53.022 malloc0
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1029491
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1029491 /var/tmp/bdevperf.sock
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1029491 ']'
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:53.022 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:53.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:53.023 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:53.023 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:23:53.023 [2024-12-13 06:29:44.468821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:23:53.023 [2024-12-13 06:29:44.468867] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1029491 ]
00:23:53.023 [2024-12-13 06:29:44.541045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:53.023 [2024-12-13 06:29:44.563698] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:23:53.023 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:53.023 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0
00:23:53.023 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.BFj
00:23:53.282 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0
00:23:53.542 [2024-12-13 06:29:45.015499] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:23:53.542 TLSTESTn1
00:23:53.542 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:53.542 Running I/O for 10 seconds...
00:23:55.862 5409.00 IOPS, 21.13 MiB/s [2024-12-13T05:29:48.452Z] 5509.00 IOPS, 21.52 MiB/s [2024-12-13T05:29:49.388Z] 5539.33 IOPS, 21.64 MiB/s [2024-12-13T05:29:50.324Z] 5353.25 IOPS, 20.91 MiB/s [2024-12-13T05:29:51.260Z] 5256.20 IOPS, 20.53 MiB/s [2024-12-13T05:29:52.636Z] 5191.00 IOPS, 20.28 MiB/s [2024-12-13T05:29:53.573Z] 5151.86 IOPS, 20.12 MiB/s [2024-12-13T05:29:54.508Z] 5112.75 IOPS, 19.97 MiB/s [2024-12-13T05:29:55.444Z] 5051.00 IOPS, 19.73 MiB/s [2024-12-13T05:29:55.444Z] 5031.80 IOPS, 19.66 MiB/s
00:24:03.790 Latency(us)
00:24:03.790 [2024-12-13T05:29:55.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.790 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:24:03.790 Verification LBA range: start 0x0 length 0x2000
00:24:03.790 TLSTESTn1 : 10.02 5035.84 19.67 0.00 0.00 25381.02 6865.68 34952.53
00:24:03.790 [2024-12-13T05:29:55.444Z] ===================================================================================================================
00:24:03.790 [2024-12-13T05:29:55.444Z] Total : 5035.84 19.67 0.00 0.00 25381.02 6865.68 34952.53
00:24:03.790 {
00:24:03.790 "results": [
00:24:03.790 {
00:24:03.790 "job": "TLSTESTn1",
00:24:03.790 "core_mask": "0x4",
00:24:03.790 "workload": "verify",
00:24:03.790 "status": "finished",
00:24:03.790 "verify_range": {
00:24:03.790 "start": 0,
00:24:03.790 "length": 8192
00:24:03.790 },
00:24:03.790 "queue_depth": 128,
00:24:03.790 "io_size": 4096,
00:24:03.790 "runtime": 10.017002,
00:24:03.790 "iops": 5035.838068116588,
00:24:03.790 "mibps": 19.671242453580422,
00:24:03.790 "io_failed": 0,
00:24:03.790 "io_timeout": 0,
00:24:03.790 "avg_latency_us": 25381.01508418576,
00:24:03.790 "min_latency_us": 6865.676190476191,
00:24:03.790 "max_latency_us": 34952.53333333333
00:24:03.790 }
00:24:03.790 ],
00:24:03.790 "core_count": 1
00:24:03.790 }
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']'
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]]
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:24:03.790 nvmf_trace.0
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1029491
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029491 ']'
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029491
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029491
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029491'
00:24:03.790 killing process with pid 1029491
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029491
00:24:03.790 Received shutdown signal, test time was about 10.000000 seconds
00:24:03.790
00:24:03.790 Latency(us)
00:24:03.790 [2024-12-13T05:29:55.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:03.790 [2024-12-13T05:29:55.444Z] ===================================================================================================================
00:24:03.790 [2024-12-13T05:29:55.444Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:03.790 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029491
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20}
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:24:04.049 rmmod nvme_tcp
00:24:04.049 rmmod nvme_fabrics
00:24:04.049 rmmod nvme_keyring
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 1029410 ']'
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 1029410
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1029410 ']'
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1029410
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1029410
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1029410'
00:24:04.049 killing process with pid 1029410
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1029410
00:24:04.049 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1029410
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:24:04.308 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.BFj
00:24:06.841
00:24:06.841 real 0m20.314s
00:24:06.841 user 0m20.470s
00:24:06.841 sys 0m10.229s
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x
00:24:06.841 ************************************
00:24:06.841 END TEST nvmf_fips
00:24:06.841 ************************************
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:06.841 ************************************
00:24:06.841 START TEST nvmf_control_msg_list
00:24:06.841 ************************************
00:24:06.841 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp
00:24:06.841 * Looking for test storage...
00:24:06.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-:
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-:
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<'
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list
-- scripts/common.sh@366 -- # ver2[v]=2 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:06.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.842 --rc genhtml_branch_coverage=1 00:24:06.842 --rc genhtml_function_coverage=1 00:24:06.842 --rc genhtml_legend=1 00:24:06.842 --rc geninfo_all_blocks=1 00:24:06.842 --rc geninfo_unexecuted_blocks=1 00:24:06.842 00:24:06.842 ' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:06.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.842 --rc genhtml_branch_coverage=1 00:24:06.842 --rc genhtml_function_coverage=1 00:24:06.842 --rc genhtml_legend=1 00:24:06.842 --rc geninfo_all_blocks=1 00:24:06.842 --rc geninfo_unexecuted_blocks=1 00:24:06.842 00:24:06.842 ' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:06.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.842 --rc genhtml_branch_coverage=1 00:24:06.842 --rc genhtml_function_coverage=1 00:24:06.842 --rc genhtml_legend=1 00:24:06.842 --rc geninfo_all_blocks=1 00:24:06.842 --rc geninfo_unexecuted_blocks=1 00:24:06.842 00:24:06.842 ' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # 
LCOV='lcov 00:24:06.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:06.842 --rc genhtml_branch_coverage=1 00:24:06.842 --rc genhtml_function_coverage=1 00:24:06.842 --rc genhtml_legend=1 00:24:06.842 --rc geninfo_all_blocks=1 00:24:06.842 --rc geninfo_unexecuted_blocks=1 00:24:06.842 00:24:06.842 ' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.842 06:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:06.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:06.842 06:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:06.842 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:06.843 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:13.411 06:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:13.411 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:13.411 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:13.412 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:13.412 06:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:13.412 Found net devices under 0000:af:00.0: cvl_0_0 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:13.412 06:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:13.412 Found net devices under 0000:af:00.1: cvl_0_1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:13.412 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:13.412 06:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:13.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:13.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.454 ms 00:24:13.412 00:24:13.412 --- 10.0.0.2 ping statistics --- 00:24:13.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.412 rtt min/avg/max/mdev = 0.454/0.454/0.454/0.000 ms 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:13.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:13.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:24:13.412 00:24:13.412 --- 10.0.0.1 ping statistics --- 00:24:13.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.412 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=1034865 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 1034865 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1034865 ']' 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.412 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 [2024-12-13 06:30:04.139538] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:13.413 [2024-12-13 06:30:04.139589] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.413 [2024-12-13 06:30:04.219637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.413 [2024-12-13 06:30:04.242287] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.413 [2024-12-13 06:30:04.242328] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.413 [2024-12-13 06:30:04.242337] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.413 [2024-12-13 06:30:04.242345] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.413 [2024-12-13 06:30:04.242350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.413 [2024-12-13 06:30:04.242845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 [2024-12-13 06:30:04.386391] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 Malloc0 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:13.413 [2024-12-13 06:30:04.434608] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1034890 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1034891 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1034892 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1034890 00:24:13.413 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:13.413 [2024-12-13 06:30:04.508990] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
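The trace above shows control_msg_list.sh configuring the target through a short RPC sequence before launching three single-queue perf clients. A non-runnable sketch of that sequence (it requires a live SPDK target; the scripts/rpc.py path and the use of the plain RPC client instead of the harness's rpc_cmd wrapper are our assumptions, and the transport flags are copied verbatim from the trace):

```shell
#!/usr/bin/env bash
# Sketch of the RPC setup traced above (assumes a running nvmf_tgt).
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a            # -a: allow any host
$rpc bdev_malloc_create -b Malloc0 32 512                           # 32 MiB bdev, 512 B blocks
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# One of the three 4 KiB randread perf clients launched in the log (cores 0x2/0x4/0x8):
./build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &
wait
```

With `--control-msg-num 1` the target is deliberately starved of control messages, which is what the test exercises by hitting it from three concurrent clients.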
00:24:13.413 [2024-12-13 06:30:04.529058] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:13.413 [2024-12-13 06:30:04.529193] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:24:13.979 Initializing NVMe Controllers
00:24:13.979 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:13.979 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3
00:24:13.979 Initialization complete. Launching workers.
00:24:13.979 ========================================================
00:24:13.979 Latency(us)
00:24:13.979 Device Information : IOPS MiB/s Average min max
00:24:13.979 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 6336.99 24.75 157.46 128.66 378.49
00:24:13.979 ========================================================
00:24:13.979 Total : 6336.99 24.75 157.46 128.66 378.49
00:24:14.238 Initializing NVMe Controllers
00:24:14.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:14.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1
00:24:14.238 Initialization complete. Launching workers.
00:24:14.238 ========================================================
00:24:14.238 Latency(us)
00:24:14.238 Device Information : IOPS MiB/s Average min max
00:24:14.238 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 41080.45 40427.71 41969.83
00:24:14.238 ========================================================
00:24:14.238 Total : 25.00 0.10 41080.45 40427.71 41969.83
00:24:14.238
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1034891
00:24:14.238 Initializing NVMe Controllers
00:24:14.238 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0
00:24:14.238 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2
00:24:14.238 Initialization complete. Launching workers.
00:24:14.238 ========================================================
00:24:14.238 Latency(us)
00:24:14.238 Device Information : IOPS MiB/s Average min max
00:24:14.238 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 6406.96 25.03 155.72 128.84 380.98
00:24:14.238 ========================================================
00:24:14.238 Total : 6406.96 25.03 155.72 128.84 380.98
00:24:14.238
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1034892
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync
00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:24:14.238 06:30:05
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:14.238 rmmod nvme_tcp 00:24:14.238 rmmod nvme_fabrics 00:24:14.238 rmmod nvme_keyring 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 1034865 ']' 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 1034865 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1034865 ']' 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1034865 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1034865 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1034865' 00:24:14.238 killing process with pid 1034865 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1034865 00:24:14.238 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1034865 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:14.498 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.404 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:16.404 00:24:16.404 real 0m10.044s 00:24:16.404 user 0m6.368s 
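The latency tables earlier in this test report both IOPS and MiB/s for the same 4 KiB workload (`-o 4096`), so the two columns should agree up to rounding: MiB/s = IOPS × 4096 / 2^20. A quick consistency check on the logged values (generic arithmetic, not part of the test scripts):

```shell
# Convert an IOPS figure to MiB/s for a fixed I/O size (4096 B, as in -o 4096).
mibps() { awk -v iops="$1" 'BEGIN { printf "%.2f\n", iops * 4096 / (1024 * 1024) }'; }

mibps 6336.99   # 24.75, matching the MiB/s column of the core-3 row
mibps 6406.96   # 25.03, matching the MiB/s column of the core-2 row
```

The core-1 client saw only 25 IOPS at ~41 ms average latency, which is the expected effect of the `--control-msg-num 1` starvation the test sets up.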
00:24:16.404 sys 0m5.594s
00:24:16.404 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:16.404 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x
00:24:16.404 ************************************
00:24:16.404 END TEST nvmf_control_msg_list ************************************
00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:24:16.700 ************************************
00:24:16.700 START TEST nvmf_wait_for_buf ************************************
00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp
00:24:16.700 * Looking for test storage...
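The wait_for_buf prologue that follows traces `scripts/common.sh` deciding whether the installed lcov predates version 2 (`lt 1.15 2`): it splits each version on `.` and `-`, then compares segment by segment, treating missing segments as zero. A minimal rendering of that idea (helper name ours, not the SPDK function):

```shell
# Return 0 (true) if version $1 is strictly less than version $2,
# comparing dot/dash-separated segments numerically, padding with zeros.
version_lt() {
    local IFS=.- v1 v2 i
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < (${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]}); i++)); do
        ((${v1[i]:-0} < ${v2[i]:-0})) && return 0
        ((${v1[i]:-0} > ${v2[i]:-0})) && return 1
    done
    return 1
}

version_lt 1.15 2 && echo older   # lcov 1.15 predates 2, as the trace concludes
```

The outcome selects which set of `--rc` coverage options the harness exports (`LCOV_OPTS`), which is what the traced `export` lines below do.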
00:24:16.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.700 --rc genhtml_branch_coverage=1 00:24:16.700 --rc genhtml_function_coverage=1 00:24:16.700 --rc genhtml_legend=1 00:24:16.700 --rc geninfo_all_blocks=1 00:24:16.700 --rc geninfo_unexecuted_blocks=1 00:24:16.700 00:24:16.700 ' 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.700 --rc genhtml_branch_coverage=1 00:24:16.700 --rc genhtml_function_coverage=1 00:24:16.700 --rc genhtml_legend=1 00:24:16.700 --rc geninfo_all_blocks=1 00:24:16.700 --rc geninfo_unexecuted_blocks=1 00:24:16.700 00:24:16.700 ' 00:24:16.700 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:16.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.700 --rc genhtml_branch_coverage=1 00:24:16.700 --rc genhtml_function_coverage=1 00:24:16.701 --rc genhtml_legend=1 00:24:16.701 --rc geninfo_all_blocks=1 00:24:16.701 --rc geninfo_unexecuted_blocks=1 00:24:16.701 00:24:16.701 ' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:16.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.701 --rc genhtml_branch_coverage=1 00:24:16.701 --rc genhtml_function_coverage=1 00:24:16.701 --rc genhtml_legend=1 00:24:16.701 --rc geninfo_all_blocks=1 00:24:16.701 --rc geninfo_unexecuted_blocks=1 00:24:16.701 00:24:16.701 ' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:16.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:16.701 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.006 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:17.006 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:17.006 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:17.006 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:22.299 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:22.299 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:22.299 Found net devices under 0000:af:00.0: cvl_0_0 00:24:22.299 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:22.300 06:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:22.300 Found net devices under 0000:af:00.1: cvl_0_1 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:22.300 06:30:13 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:22.300 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:22.559 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:22.559 06:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:22.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:22.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:24:22.559 00:24:22.559 --- 10.0.0.2 ping statistics --- 00:24:22.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.559 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:22.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:22.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:24:22.559 00:24:22.559 --- 10.0.0.1 ping statistics --- 00:24:22.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:22.559 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:22.559 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=1038980 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 1038980 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1038980 ']' 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:22.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:22.818 [2024-12-13 06:30:14.280453] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:24:22.818 [2024-12-13 06:30:14.280494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:22.818 [2024-12-13 06:30:14.344642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:22.818 [2024-12-13 06:30:14.366077] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:22.818 [2024-12-13 06:30:14.366113] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:22.818 [2024-12-13 06:30:14.366120] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:22.818 [2024-12-13 06:30:14.366126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:22.818 [2024-12-13 06:30:14.366132] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:22.818 [2024-12-13 06:30:14.366630] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:22.818 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 
06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 Malloc0 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.077 [2024-12-13 06:30:14.590596] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:23.077 [2024-12-13 06:30:14.618799] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:23.077 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:23.077 [2024-12-13 06:30:14.705526] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:24.983 Initializing NVMe Controllers 00:24:24.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:24.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:24.983 Initialization complete. Launching workers. 00:24:24.983 ======================================================== 00:24:24.983 Latency(us) 00:24:24.983 Device Information : IOPS MiB/s Average min max 00:24:24.983 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 29.89 3.74 143279.05 47124.42 191534.43 00:24:24.983 ======================================================== 00:24:24.983 Total : 29.89 3.74 143279.05 47124.42 191534.43 00:24:24.983 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.983 06:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=454 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 454 -eq 0 ]] 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.983 rmmod nvme_tcp 00:24:24.983 rmmod nvme_fabrics 00:24:24.983 rmmod nvme_keyring 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 1038980 ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 1038980 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1038980 ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1038980 
00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1038980 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1038980' 00:24:24.983 killing process with pid 1038980 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1038980 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1038980 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:24.983 06:30:16 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:24.983 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:26.887 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.146 00:24:27.146 real 0m10.433s 00:24:27.146 user 0m4.060s 00:24:27.146 sys 0m4.825s 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:27.146 ************************************ 00:24:27.146 END TEST nvmf_wait_for_buf 00:24:27.146 ************************************ 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:27.146 ************************************ 00:24:27.146 START TEST nvmf_fuzz 00:24:27.146 ************************************ 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:27.146 * Looking for test storage... 00:24:27.146 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:27.146 06:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.146 --rc genhtml_branch_coverage=1 00:24:27.146 --rc genhtml_function_coverage=1 
00:24:27.146 --rc genhtml_legend=1 00:24:27.146 --rc geninfo_all_blocks=1 00:24:27.146 --rc geninfo_unexecuted_blocks=1 00:24:27.146 00:24:27.146 ' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.146 --rc genhtml_branch_coverage=1 00:24:27.146 --rc genhtml_function_coverage=1 00:24:27.146 --rc genhtml_legend=1 00:24:27.146 --rc geninfo_all_blocks=1 00:24:27.146 --rc geninfo_unexecuted_blocks=1 00:24:27.146 00:24:27.146 ' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.146 --rc genhtml_branch_coverage=1 00:24:27.146 --rc genhtml_function_coverage=1 00:24:27.146 --rc genhtml_legend=1 00:24:27.146 --rc geninfo_all_blocks=1 00:24:27.146 --rc geninfo_unexecuted_blocks=1 00:24:27.146 00:24:27.146 ' 00:24:27.146 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:27.146 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:27.147 --rc genhtml_branch_coverage=1 00:24:27.147 --rc genhtml_function_coverage=1 00:24:27.147 --rc genhtml_legend=1 00:24:27.147 --rc geninfo_all_blocks=1 00:24:27.147 --rc geninfo_unexecuted_blocks=1 00:24:27.147 00:24:27.147 ' 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.147 
06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.147 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:27.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.407 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.976 06:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:33.976 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:33.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:33.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:33.977 Found net devices under 0000:af:00.0: cvl_0_0 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:33.977 Found net devices under 0000:af:00.1: cvl_0_1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:33.977 06:30:24 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:33.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:33.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.345 ms 00:24:33.977 00:24:33.977 --- 10.0.0.2 ping statistics --- 00:24:33.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.977 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.977 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:33.977 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:24:33.977 00:24:33.977 --- 10.0.0.1 ping statistics --- 00:24:33.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.977 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1042692 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1042692 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 1042692 ']' 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.977 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.978 Malloc0 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.978 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:33.978 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:06.049 Fuzzing completed. 
Shutting down the fuzz application 00:25:06.049 00:25:06.049 Dumping successful admin opcodes: 00:25:06.049 9, 10, 00:25:06.049 Dumping successful io opcodes: 00:25:06.049 0, 9, 00:25:06.049 NS: 0x2000008eff00 I/O qp, Total commands completed: 1012557, total successful commands: 5933, random_seed: 3414360960 00:25:06.049 NS: 0x2000008eff00 admin qp, Total commands completed: 132736, total successful commands: 29, random_seed: 560996352 00:25:06.049 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:06.049 Fuzzing completed. Shutting down the fuzz application 00:25:06.049 00:25:06.049 Dumping successful admin opcodes: 00:25:06.049 00:25:06.049 Dumping successful io opcodes: 00:25:06.049 00:25:06.049 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 1699015364 00:25:06.049 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 1699077590 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:06.049 06:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:06.049 rmmod nvme_tcp 00:25:06.049 rmmod nvme_fabrics 00:25:06.049 rmmod nvme_keyring 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 1042692 ']' 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 1042692 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1042692 ']' 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 1042692 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1042692 00:25:06.049 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = 
sudo ']' 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1042692' 00:25:06.050 killing process with pid 1042692 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 1042692 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 1042692 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:06.050 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.427 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.427 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:07.427 00:25:07.427 real 0m40.433s 00:25:07.427 user 0m54.031s 00:25:07.427 sys 0m15.656s 00:25:07.427 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.427 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 ************************************ 00:25:07.427 END TEST nvmf_fuzz 00:25:07.427 ************************************ 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:07.687 ************************************ 00:25:07.687 START TEST nvmf_multiconnection 00:25:07.687 ************************************ 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:07.687 * Looking for test storage... 
00:25:07.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:07.687 06:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:07.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.687 --rc genhtml_branch_coverage=1 00:25:07.687 --rc genhtml_function_coverage=1 00:25:07.687 --rc genhtml_legend=1 00:25:07.687 --rc geninfo_all_blocks=1 00:25:07.687 --rc geninfo_unexecuted_blocks=1 00:25:07.687 00:25:07.687 ' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:07.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.687 --rc genhtml_branch_coverage=1 00:25:07.687 --rc genhtml_function_coverage=1 00:25:07.687 --rc genhtml_legend=1 00:25:07.687 --rc geninfo_all_blocks=1 00:25:07.687 --rc geninfo_unexecuted_blocks=1 00:25:07.687 00:25:07.687 ' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:07.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.687 --rc genhtml_branch_coverage=1 00:25:07.687 --rc genhtml_function_coverage=1 00:25:07.687 --rc genhtml_legend=1 00:25:07.687 --rc geninfo_all_blocks=1 00:25:07.687 --rc geninfo_unexecuted_blocks=1 00:25:07.687 00:25:07.687 ' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:07.687 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.687 --rc genhtml_branch_coverage=1 00:25:07.687 --rc genhtml_function_coverage=1 00:25:07.687 --rc genhtml_legend=1 00:25:07.687 --rc geninfo_all_blocks=1 00:25:07.687 --rc geninfo_unexecuted_blocks=1 00:25:07.687 00:25:07.687 ' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.687 06:30:59 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.687 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.688 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.688 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.258 06:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:14.258 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:14.259 06:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:14.259 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:14.259 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:14.259 Found net devices under 0000:af:00.0: cvl_0_0 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:14.259 Found net devices under 0000:af:00.1: cvl_0_1 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:14.259 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:14.259 06:31:05 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:14.259 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.259 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.341 ms 00:25:14.259 00:25:14.259 --- 10.0.0.2 ping statistics --- 00:25:14.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.259 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:14.259 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.259 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.210 ms 00:25:14.259 00:25:14.259 --- 10.0.0.1 ping statistics --- 00:25:14.259 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.259 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=1051255 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 1051255 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 1051255 ']' 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.259 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.259 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:14.259 [2024-12-13 06:31:05.253040] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:25:14.259 [2024-12-13 06:31:05.253080] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.259 [2024-12-13 06:31:05.333297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.259 [2024-12-13 06:31:05.357329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.259 [2024-12-13 06:31:05.357365] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.259 [2024-12-13 06:31:05.357372] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.260 [2024-12-13 06:31:05.357378] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:14.260 [2024-12-13 06:31:05.357383] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:14.260 [2024-12-13 06:31:05.358718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:25:14.260 [2024-12-13 06:31:05.358829] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:25:14.260 [2024-12-13 06:31:05.358911] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:25:14.260 [2024-12-13 06:31:05.358912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 [2024-12-13 06:31:05.491918] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 Malloc1
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 [2024-12-13 06:31:05.552749] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 Malloc2
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 Malloc3
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 Malloc4
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.260 Malloc5
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.260 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 Malloc6
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 Malloc7
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 Malloc8
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.261 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 Malloc9
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 Malloc10
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 Malloc11
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:14.521 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:25:15.897 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:25:15.897 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:15.897 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:15.897 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:15.897 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:17.798 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:25:18.734 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:25:18.734 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:18.734 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:18.992 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:18.992 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:20.895 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:25:22.271 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:25:22.271 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:22.271 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:22.271 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:22.271 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:24.205 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:25:25.580 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:25:25.580 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:25.580 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:25.580 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:25.580 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:27.484 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:25:28.420 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:25:28.420 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:28.420 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:28.420 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:28.420 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:30.954 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:30.954 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:30.954 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:25:30.954 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:30.954 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:30.954 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:30.954 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:30.954 06:31:22 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:25:31.890 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:25:31.890 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:31.890 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:31.890 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:31.890 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:33.792 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:33.793 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:25:35.170 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:25:35.170 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:35.170 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:35.170 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:35.170 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:37.075 06:31:28
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:37.075 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:38.453 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:38.453 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:38.453 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:38.453 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:38.453 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:40.988 06:31:32 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.988 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:41.925 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:41.925 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:41.925 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:41.925 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:41.925 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:25:43.829 06:31:35 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:43.829 06:31:35 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:45.205 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:45.205 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:45.205 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:45.205 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:45.205 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:47.740 06:31:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:47.740 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:49.117 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:49.117 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:25:49.117 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:25:49.117 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:25:49.117 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:25:51.023 
06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:25:51.023 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:51.023 [global] 00:25:51.023 thread=1 00:25:51.023 invalidate=1 00:25:51.023 rw=read 00:25:51.023 time_based=1 00:25:51.023 runtime=10 00:25:51.023 ioengine=libaio 00:25:51.023 direct=1 00:25:51.023 bs=262144 00:25:51.023 iodepth=64 00:25:51.023 norandommap=1 00:25:51.023 numjobs=1 00:25:51.023 00:25:51.023 [job0] 00:25:51.023 filename=/dev/nvme0n1 00:25:51.023 [job1] 00:25:51.023 filename=/dev/nvme10n1 00:25:51.023 [job2] 00:25:51.023 filename=/dev/nvme1n1 00:25:51.023 [job3] 00:25:51.023 filename=/dev/nvme2n1 00:25:51.023 [job4] 00:25:51.023 filename=/dev/nvme3n1 00:25:51.023 [job5] 00:25:51.023 filename=/dev/nvme4n1 00:25:51.023 [job6] 00:25:51.023 filename=/dev/nvme5n1 00:25:51.023 [job7] 00:25:51.023 filename=/dev/nvme6n1 00:25:51.023 [job8] 00:25:51.023 filename=/dev/nvme7n1 00:25:51.023 [job9] 00:25:51.023 filename=/dev/nvme8n1 00:25:51.023 [job10] 00:25:51.023 filename=/dev/nvme9n1 00:25:51.023 Could not set queue depth (nvme0n1) 00:25:51.023 Could not set queue depth (nvme10n1) 00:25:51.023 Could not set queue depth (nvme1n1) 00:25:51.023 Could not set queue depth (nvme2n1) 00:25:51.023 Could not set queue depth (nvme3n1) 00:25:51.023 Could not set queue depth (nvme4n1) 00:25:51.023 Could not set queue depth (nvme5n1) 00:25:51.023 Could not set queue depth (nvme6n1) 00:25:51.023 Could not set queue depth (nvme7n1) 00:25:51.023 Could not set queue depth (nvme8n1) 00:25:51.023 Could not set queue depth (nvme9n1) 00:25:51.282 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:25:51.282 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:51.282 fio-3.35 00:25:51.282 Starting 11 threads 00:26:03.492 00:26:03.492 job0: (groupid=0, jobs=1): err= 0: pid=1057777: Fri Dec 13 06:31:53 2024 00:26:03.492 read: IOPS=488, BW=122MiB/s (128MB/s)(1233MiB/10090msec) 00:26:03.492 slat (usec): min=9, max=561204, avg=1997.63, stdev=14071.10 00:26:03.492 clat (usec): min=1563, max=1367.4k, avg=128805.45, stdev=187644.81 00:26:03.492 lat (usec): min=1598, max=1367.4k, avg=130803.08, stdev=190459.25 00:26:03.492 clat percentiles (msec): 00:26:03.492 | 1.00th=[ 24], 5.00th=[ 26], 10.00th=[ 28], 20.00th=[ 29], 00:26:03.492 | 30.00th=[ 32], 40.00th=[ 34], 50.00th=[ 42], 60.00th=[ 66], 00:26:03.492 | 70.00th=[ 91], 80.00th=[ 146], 90.00th=[ 405], 95.00th=[ 468], 00:26:03.492 | 99.00th=[ 1020], 99.50th=[ 1133], 99.90th=[ 1234], 99.95th=[ 1267], 00:26:03.492 | 99.99th=[ 1368] 00:26:03.492 bw ( KiB/s): 
min=11264, max=557568, per=16.63%, avg=124569.60, stdev=159202.59, samples=20 00:26:03.492 iops : min= 44, max= 2178, avg=486.60, stdev=621.89, samples=20 00:26:03.492 lat (msec) : 2=0.04%, 4=0.08%, 10=0.02%, 20=0.08%, 50=55.21% 00:26:03.492 lat (msec) : 100=17.81%, 250=9.09%, 500=13.35%, 750=2.58%, 1000=0.55% 00:26:03.492 lat (msec) : 2000=1.20% 00:26:03.492 cpu : usr=0.14%, sys=1.92%, ctx=700, majf=0, minf=4097 00:26:03.492 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:03.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.492 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.492 issued rwts: total=4930,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.492 job1: (groupid=0, jobs=1): err= 0: pid=1057779: Fri Dec 13 06:31:53 2024 00:26:03.492 read: IOPS=180, BW=45.2MiB/s (47.4MB/s)(462MiB/10230msec) 00:26:03.492 slat (usec): min=15, max=508542, avg=4097.09, stdev=21217.65 00:26:03.492 clat (usec): min=1725, max=1419.6k, avg=349539.31, stdev=270032.01 00:26:03.492 lat (usec): min=1801, max=1519.3k, avg=353636.40, stdev=272895.88 00:26:03.492 clat percentiles (msec): 00:26:03.492 | 1.00th=[ 9], 5.00th=[ 13], 10.00th=[ 27], 20.00th=[ 106], 00:26:03.492 | 30.00th=[ 184], 40.00th=[ 279], 50.00th=[ 321], 60.00th=[ 359], 00:26:03.492 | 70.00th=[ 409], 80.00th=[ 550], 90.00th=[ 684], 95.00th=[ 911], 00:26:03.492 | 99.00th=[ 1217], 99.50th=[ 1334], 99.90th=[ 1418], 99.95th=[ 1418], 00:26:03.492 | 99.99th=[ 1418] 00:26:03.492 bw ( KiB/s): min= 8192, max=173568, per=6.10%, avg=45696.00, stdev=36404.86, samples=20 00:26:03.492 iops : min= 32, max= 678, avg=178.50, stdev=142.21, samples=20 00:26:03.492 lat (msec) : 2=0.05%, 4=0.27%, 10=0.97%, 20=7.68%, 50=6.98% 00:26:03.492 lat (msec) : 100=3.52%, 250=16.66%, 500=40.83%, 750=15.95%, 1000=4.16% 00:26:03.492 lat (msec) : 2000=2.92% 00:26:03.492 cpu : usr=0.08%, 
sys=0.76%, ctx=346, majf=0, minf=4098 00:26:03.492 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.9%, 32=1.7%, >=64=96.6% 00:26:03.492 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.492 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.492 issued rwts: total=1849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.492 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.492 job2: (groupid=0, jobs=1): err= 0: pid=1057780: Fri Dec 13 06:31:53 2024 00:26:03.492 read: IOPS=313, BW=78.3MiB/s (82.1MB/s)(801MiB/10224msec) 00:26:03.492 slat (usec): min=9, max=755727, avg=2159.08, stdev=22829.65 00:26:03.492 clat (usec): min=957, max=1366.3k, avg=201960.89, stdev=272262.03 00:26:03.492 lat (usec): min=997, max=1366.4k, avg=204119.96, stdev=274203.21 00:26:03.492 clat percentiles (usec): 00:26:03.492 | 1.00th=[ 1418], 5.00th=[ 2933], 10.00th=[ 8717], 00:26:03.492 | 20.00th=[ 26870], 30.00th=[ 42206], 40.00th=[ 45351], 00:26:03.492 | 50.00th=[ 58459], 60.00th=[ 78119], 70.00th=[ 233833], 00:26:03.492 | 80.00th=[ 404751], 90.00th=[ 534774], 95.00th=[ 759170], 00:26:03.492 | 99.00th=[1350566], 99.50th=[1367344], 99.90th=[1367344], 00:26:03.493 | 99.95th=[1367344], 99.99th=[1367344] 00:26:03.493 bw ( KiB/s): min=10240, max=299008, per=11.29%, avg=84560.84, stdev=83889.96, samples=19 00:26:03.493 iops : min= 40, max= 1168, avg=330.32, stdev=327.70, samples=19 00:26:03.493 lat (usec) : 1000=0.03% 00:26:03.493 lat (msec) : 2=2.31%, 4=4.06%, 10=4.43%, 20=6.75%, 50=28.42% 00:26:03.493 lat (msec) : 100=16.21%, 250=8.37%, 500=18.08%, 750=6.15%, 1000=2.19% 00:26:03.493 lat (msec) : 2000=3.00% 00:26:03.493 cpu : usr=0.17%, sys=1.17%, ctx=1220, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, 
>=64=0.0% 00:26:03.493 issued rwts: total=3202,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job3: (groupid=0, jobs=1): err= 0: pid=1057781: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=218, BW=54.6MiB/s (57.2MB/s)(550MiB/10085msec) 00:26:03.493 slat (usec): min=18, max=628028, avg=2118.74, stdev=19029.62 00:26:03.493 clat (usec): min=1357, max=1395.7k, avg=290701.00, stdev=307971.63 00:26:03.493 lat (usec): min=1411, max=1554.7k, avg=292819.74, stdev=310628.50 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 5], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 27], 00:26:03.493 | 30.00th=[ 61], 40.00th=[ 107], 50.00th=[ 148], 60.00th=[ 230], 00:26:03.493 | 70.00th=[ 472], 80.00th=[ 567], 90.00th=[ 718], 95.00th=[ 860], 00:26:03.493 | 99.00th=[ 1284], 99.50th=[ 1301], 99.90th=[ 1401], 99.95th=[ 1401], 00:26:03.493 | 99.99th=[ 1401] 00:26:03.493 bw ( KiB/s): min=10240, max=190976, per=7.31%, avg=54736.05, stdev=50944.02, samples=20 00:26:03.493 iops : min= 40, max= 746, avg=213.80, stdev=199.01, samples=20 00:26:03.493 lat (msec) : 2=0.14%, 4=0.55%, 10=11.72%, 20=6.09%, 50=9.90% 00:26:03.493 lat (msec) : 100=10.63%, 250=22.99%, 500=9.95%, 750=19.08%, 1000=5.32% 00:26:03.493 lat (msec) : 2000=3.63% 00:26:03.493 cpu : usr=0.07%, sys=0.86%, ctx=738, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued rwts: total=2201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job4: (groupid=0, jobs=1): err= 0: pid=1057782: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=256, BW=64.1MiB/s (67.2MB/s)(656MiB/10226msec) 00:26:03.493 slat (usec): min=15, max=435625, avg=2282.08, stdev=18322.76 
00:26:03.493 clat (usec): min=525, max=1365.4k, avg=246805.08, stdev=299011.16 00:26:03.493 lat (usec): min=553, max=1365.5k, avg=249087.15, stdev=301719.20 00:26:03.493 clat percentiles (usec): 00:26:03.493 | 1.00th=[ 676], 5.00th=[ 3490], 10.00th=[ 6194], 00:26:03.493 | 20.00th=[ 9765], 30.00th=[ 12256], 40.00th=[ 38011], 00:26:03.493 | 50.00th=[ 80217], 60.00th=[ 179307], 70.00th=[ 408945], 00:26:03.493 | 80.00th=[ 549454], 90.00th=[ 742392], 95.00th=[ 843056], 00:26:03.493 | 99.00th=[1035994], 99.50th=[1082131], 99.90th=[1166017], 00:26:03.493 | 99.95th=[1166017], 99.99th=[1367344] 00:26:03.493 bw ( KiB/s): min=11264, max=330752, per=8.74%, avg=65510.40, stdev=74399.83, samples=20 00:26:03.493 iops : min= 44, max= 1292, avg=255.90, stdev=290.62, samples=20 00:26:03.493 lat (usec) : 750=1.41%, 1000=0.23% 00:26:03.493 lat (msec) : 2=1.49%, 4=2.48%, 10=15.10%, 20=12.62%, 50=11.28% 00:26:03.493 lat (msec) : 100=8.50%, 250=13.65%, 500=9.38%, 750=14.49%, 1000=7.97% 00:26:03.493 lat (msec) : 2000=1.41% 00:26:03.493 cpu : usr=0.07%, sys=0.93%, ctx=1300, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued rwts: total=2623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job5: (groupid=0, jobs=1): err= 0: pid=1057783: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=125, BW=31.3MiB/s (32.8MB/s)(320MiB/10222msec) 00:26:03.493 slat (usec): min=14, max=534823, avg=7092.86, stdev=27992.34 00:26:03.493 clat (msec): min=14, max=1162, avg=503.68, stdev=266.04 00:26:03.493 lat (msec): min=14, max=1359, avg=510.77, stdev=270.05 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 19], 5.00th=[ 26], 10.00th=[ 53], 20.00th=[ 241], 00:26:03.493 | 30.00th=[ 443], 
40.00th=[ 485], 50.00th=[ 514], 60.00th=[ 550], 00:26:03.493 | 70.00th=[ 634], 80.00th=[ 735], 90.00th=[ 877], 95.00th=[ 911], 00:26:03.493 | 99.00th=[ 1011], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1167], 00:26:03.493 | 99.99th=[ 1167] 00:26:03.493 bw ( KiB/s): min=15872, max=89088, per=4.37%, avg=32741.05, stdev=16329.22, samples=19 00:26:03.493 iops : min= 62, max= 348, avg=127.89, stdev=63.79, samples=19 00:26:03.493 lat (msec) : 20=1.95%, 50=7.58%, 100=4.14%, 250=6.49%, 500=24.71% 00:26:03.493 lat (msec) : 750=36.36%, 1000=16.58%, 2000=2.19% 00:26:03.493 cpu : usr=0.05%, sys=0.68%, ctx=226, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.6%, 16=1.3%, 32=2.5%, >=64=95.1% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued rwts: total=1279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job6: (groupid=0, jobs=1): err= 0: pid=1057784: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=289, BW=72.5MiB/s (76.0MB/s)(744MiB/10266msec) 00:26:03.493 slat (usec): min=16, max=255690, avg=3121.49, stdev=15763.00 00:26:03.493 clat (msec): min=7, max=1242, avg=217.30, stdev=266.66 00:26:03.493 lat (msec): min=8, max=1310, avg=220.42, stdev=270.33 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 14], 5.00th=[ 34], 10.00th=[ 36], 20.00th=[ 39], 00:26:03.493 | 30.00th=[ 40], 40.00th=[ 44], 50.00th=[ 47], 60.00th=[ 111], 00:26:03.493 | 70.00th=[ 234], 80.00th=[ 485], 90.00th=[ 600], 95.00th=[ 751], 00:26:03.493 | 99.00th=[ 1099], 99.50th=[ 1133], 99.90th=[ 1250], 99.95th=[ 1250], 00:26:03.493 | 99.99th=[ 1250] 00:26:03.493 bw ( KiB/s): min=10240, max=396288, per=9.95%, avg=74547.20, stdev=114825.11, samples=20 00:26:03.493 iops : min= 40, max= 1548, avg=291.20, stdev=448.54, samples=20 00:26:03.493 lat (msec) : 10=0.27%, 20=2.05%, 
50=50.60%, 100=6.82%, 250=10.55% 00:26:03.493 lat (msec) : 500=11.26%, 750=13.64%, 1000=3.53%, 2000=1.28% 00:26:03.493 cpu : usr=0.09%, sys=1.28%, ctx=350, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued rwts: total=2976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job7: (groupid=0, jobs=1): err= 0: pid=1057785: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=224, BW=56.2MiB/s (58.9MB/s)(575MiB/10227msec) 00:26:03.493 slat (usec): min=18, max=258206, avg=4125.44, stdev=18689.05 00:26:03.493 clat (msec): min=7, max=1120, avg=280.11, stdev=269.22 00:26:03.493 lat (msec): min=7, max=1121, avg=284.23, stdev=273.39 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 14], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 37], 00:26:03.493 | 30.00th=[ 43], 40.00th=[ 99], 50.00th=[ 159], 60.00th=[ 266], 00:26:03.493 | 70.00th=[ 472], 80.00th=[ 542], 90.00th=[ 642], 95.00th=[ 793], 00:26:03.493 | 99.00th=[ 961], 99.50th=[ 1070], 99.90th=[ 1116], 99.95th=[ 1116], 00:26:03.493 | 99.99th=[ 1116] 00:26:03.493 bw ( KiB/s): min=15360, max=420352, per=7.64%, avg=57216.00, stdev=89978.98, samples=20 00:26:03.493 iops : min= 60, max= 1642, avg=223.50, stdev=351.48, samples=20 00:26:03.493 lat (msec) : 10=0.22%, 20=2.52%, 50=30.93%, 100=6.39%, 250=18.40% 00:26:03.493 lat (msec) : 500=14.88%, 750=19.70%, 1000=6.44%, 2000=0.52% 00:26:03.493 cpu : usr=0.14%, sys=0.97%, ctx=345, majf=0, minf=3722 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued 
rwts: total=2299,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job8: (groupid=0, jobs=1): err= 0: pid=1057787: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=193, BW=48.5MiB/s (50.9MB/s)(496MiB/10227msec) 00:26:03.493 slat (usec): min=17, max=487750, avg=3964.22, stdev=22670.67 00:26:03.493 clat (usec): min=1311, max=1310.9k, avg=325505.56, stdev=278014.67 00:26:03.493 lat (usec): min=1355, max=1311.0k, avg=329469.78, stdev=280614.71 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 3], 5.00th=[ 14], 10.00th=[ 23], 20.00th=[ 41], 00:26:03.493 | 30.00th=[ 136], 40.00th=[ 255], 50.00th=[ 296], 60.00th=[ 330], 00:26:03.493 | 70.00th=[ 380], 80.00th=[ 481], 90.00th=[ 743], 95.00th=[ 936], 00:26:03.493 | 99.00th=[ 1116], 99.50th=[ 1150], 99.90th=[ 1318], 99.95th=[ 1318], 00:26:03.493 | 99.99th=[ 1318] 00:26:03.493 bw ( KiB/s): min= 6144, max=194048, per=6.56%, avg=49148.80, stdev=37910.66, samples=20 00:26:03.493 iops : min= 24, max= 758, avg=191.95, stdev=148.11, samples=20 00:26:03.493 lat (msec) : 2=0.40%, 4=0.96%, 10=3.12%, 20=1.51%, 50=14.21% 00:26:03.493 lat (msec) : 100=5.95%, 250=13.46%, 500=41.23%, 750=9.38%, 1000=5.70% 00:26:03.493 lat (msec) : 2000=4.08% 00:26:03.493 cpu : usr=0.07%, sys=0.89%, ctx=618, majf=0, minf=4097 00:26:03.493 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.8% 00:26:03.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.493 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.493 issued rwts: total=1984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.493 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.493 job9: (groupid=0, jobs=1): err= 0: pid=1057788: Fri Dec 13 06:31:53 2024 00:26:03.493 read: IOPS=283, BW=70.8MiB/s (74.3MB/s)(725MiB/10227msec) 00:26:03.493 slat (usec): min=20, max=189660, avg=3176.27, stdev=14980.19 00:26:03.493 clat (msec): 
min=9, max=1223, avg=222.37, stdev=247.53 00:26:03.493 lat (msec): min=9, max=1223, avg=225.55, stdev=250.99 00:26:03.493 clat percentiles (msec): 00:26:03.493 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 29], 20.00th=[ 34], 00:26:03.493 | 30.00th=[ 42], 40.00th=[ 51], 50.00th=[ 86], 60.00th=[ 128], 00:26:03.493 | 70.00th=[ 355], 80.00th=[ 493], 90.00th=[ 592], 95.00th=[ 709], 00:26:03.493 | 99.00th=[ 877], 99.50th=[ 902], 99.90th=[ 1070], 99.95th=[ 1116], 00:26:03.493 | 99.99th=[ 1217] 00:26:03.493 bw ( KiB/s): min=16896, max=356040, per=9.69%, avg=72586.00, stdev=86311.12, samples=20 00:26:03.493 iops : min= 66, max= 1390, avg=283.50, stdev=337.02, samples=20 00:26:03.494 lat (msec) : 10=0.03%, 20=4.07%, 50=35.89%, 100=12.32%, 250=15.98% 00:26:03.494 lat (msec) : 500=12.15%, 750=16.32%, 1000=2.93%, 2000=0.31% 00:26:03.494 cpu : usr=0.16%, sys=1.16%, ctx=483, majf=0, minf=4097 00:26:03.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:03.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.494 issued rwts: total=2898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.494 job10: (groupid=0, jobs=1): err= 0: pid=1057789: Fri Dec 13 06:31:53 2024 00:26:03.494 read: IOPS=376, BW=94.0MiB/s (98.6MB/s)(950MiB/10108msec) 00:26:03.494 slat (usec): min=15, max=538411, avg=1997.63, stdev=14411.24 00:26:03.494 clat (usec): min=945, max=1150.7k, avg=168030.44, stdev=175134.89 00:26:03.494 lat (usec): min=989, max=1150.7k, avg=170028.06, stdev=176723.44 00:26:03.494 clat percentiles (msec): 00:26:03.494 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 17], 20.00th=[ 35], 00:26:03.494 | 30.00th=[ 65], 40.00th=[ 84], 50.00th=[ 116], 60.00th=[ 144], 00:26:03.494 | 70.00th=[ 228], 80.00th=[ 292], 90.00th=[ 351], 95.00th=[ 397], 00:26:03.494 | 99.00th=[ 1083], 99.50th=[ 1099], 
99.90th=[ 1116], 99.95th=[ 1150], 00:26:03.494 | 99.99th=[ 1150] 00:26:03.494 bw ( KiB/s): min= 2560, max=217600, per=12.77%, avg=95698.25, stdev=62274.40, samples=20 00:26:03.494 iops : min= 10, max= 850, avg=373.80, stdev=243.27, samples=20 00:26:03.494 lat (usec) : 1000=0.03% 00:26:03.494 lat (msec) : 2=0.84%, 4=3.34%, 10=3.79%, 20=2.76%, 50=11.47% 00:26:03.494 lat (msec) : 100=24.70%, 250=25.31%, 500=24.07%, 750=2.21%, 1000=0.05% 00:26:03.494 lat (msec) : 2000=1.42% 00:26:03.494 cpu : usr=0.14%, sys=1.54%, ctx=983, majf=0, minf=4097 00:26:03.494 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:26:03.494 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:03.494 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:03.494 issued rwts: total=3801,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:03.494 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:03.494 00:26:03.494 Run status group 0 (all jobs): 00:26:03.494 READ: bw=732MiB/s (767MB/s), 31.3MiB/s-122MiB/s (32.8MB/s-128MB/s), io=7511MiB (7875MB), run=10085-10266msec 00:26:03.494 00:26:03.494 Disk stats (read/write): 00:26:03.494 nvme0n1: ios=9672/0, merge=0/0, ticks=1231097/0, in_queue=1231097, util=94.98% 00:26:03.494 nvme10n1: ios=3581/0, merge=0/0, ticks=1211961/0, in_queue=1211961, util=95.52% 00:26:03.494 nvme1n1: ios=6354/0, merge=0/0, ticks=1246435/0, in_queue=1246435, util=96.12% 00:26:03.494 nvme2n1: ios=4221/0, merge=0/0, ticks=1238668/0, in_queue=1238668, util=96.36% 00:26:03.494 nvme3n1: ios=5201/0, merge=0/0, ticks=1257825/0, in_queue=1257825, util=96.69% 00:26:03.494 nvme4n1: ios=2492/0, merge=0/0, ticks=1246683/0, in_queue=1246683, util=97.45% 00:26:03.494 nvme5n1: ios=5885/0, merge=0/0, ticks=1250037/0, in_queue=1250037, util=97.87% 00:26:03.494 nvme6n1: ios=4535/0, merge=0/0, ticks=1250800/0, in_queue=1250800, util=98.17% 00:26:03.494 nvme7n1: ios=3886/0, merge=0/0, ticks=1246433/0, in_queue=1246433, 
util=98.97% 00:26:03.494 nvme8n1: ios=5706/0, merge=0/0, ticks=1233036/0, in_queue=1233036, util=99.15% 00:26:03.494 nvme9n1: ios=7449/0, merge=0/0, ticks=1214072/0, in_queue=1214072, util=99.23% 00:26:03.494 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:03.494 [global] 00:26:03.494 thread=1 00:26:03.494 invalidate=1 00:26:03.494 rw=randwrite 00:26:03.494 time_based=1 00:26:03.494 runtime=10 00:26:03.494 ioengine=libaio 00:26:03.494 direct=1 00:26:03.494 bs=262144 00:26:03.494 iodepth=64 00:26:03.494 norandommap=1 00:26:03.494 numjobs=1 00:26:03.494 00:26:03.494 [job0] 00:26:03.494 filename=/dev/nvme0n1 00:26:03.494 [job1] 00:26:03.494 filename=/dev/nvme10n1 00:26:03.494 [job2] 00:26:03.494 filename=/dev/nvme1n1 00:26:03.494 [job3] 00:26:03.494 filename=/dev/nvme2n1 00:26:03.494 [job4] 00:26:03.494 filename=/dev/nvme3n1 00:26:03.494 [job5] 00:26:03.494 filename=/dev/nvme4n1 00:26:03.494 [job6] 00:26:03.494 filename=/dev/nvme5n1 00:26:03.494 [job7] 00:26:03.494 filename=/dev/nvme6n1 00:26:03.494 [job8] 00:26:03.494 filename=/dev/nvme7n1 00:26:03.494 [job9] 00:26:03.494 filename=/dev/nvme8n1 00:26:03.494 [job10] 00:26:03.494 filename=/dev/nvme9n1 00:26:03.494 Could not set queue depth (nvme0n1) 00:26:03.494 Could not set queue depth (nvme10n1) 00:26:03.494 Could not set queue depth (nvme1n1) 00:26:03.494 Could not set queue depth (nvme2n1) 00:26:03.494 Could not set queue depth (nvme3n1) 00:26:03.494 Could not set queue depth (nvme4n1) 00:26:03.494 Could not set queue depth (nvme5n1) 00:26:03.494 Could not set queue depth (nvme6n1) 00:26:03.494 Could not set queue depth (nvme7n1) 00:26:03.494 Could not set queue depth (nvme8n1) 00:26:03.494 Could not set queue depth (nvme9n1) 00:26:03.494 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 
00:26:03.494 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:03.494 fio-3.35 00:26:03.494 Starting 11 threads 00:26:13.653 00:26:13.653 job0: (groupid=0, jobs=1): err= 0: pid=1058815: Fri Dec 13 06:32:04 2024 00:26:13.653 write: IOPS=383, BW=95.9MiB/s (101MB/s)(970MiB/10110msec); 0 zone resets 00:26:13.653 slat (usec): min=19, max=57644, avg=1984.80, stdev=4841.72 00:26:13.653 clat (usec): min=788, max=421190, avg=164716.47, stdev=86624.56 00:26:13.653 lat (usec): min=833, max=422228, avg=166701.27, stdev=87648.21 00:26:13.653 clat percentiles (msec): 00:26:13.653 | 1.00th=[ 5], 5.00th=[ 20], 10.00th=[ 59], 20.00th=[ 87], 00:26:13.653 | 30.00th=[ 109], 40.00th=[ 142], 50.00th=[ 167], 60.00th=[ 180], 00:26:13.653 | 70.00th=[ 199], 80.00th=[ 239], 90.00th=[ 292], 
95.00th=[ 317], 00:26:13.653 | 99.00th=[ 376], 99.50th=[ 401], 99.90th=[ 414], 99.95th=[ 418], 00:26:13.653 | 99.99th=[ 422] 00:26:13.653 bw ( KiB/s): min=50688, max=176128, per=9.38%, avg=97689.60, stdev=37990.77, samples=20 00:26:13.653 iops : min= 198, max= 688, avg=381.60, stdev=148.40, samples=20 00:26:13.653 lat (usec) : 1000=0.10% 00:26:13.653 lat (msec) : 2=0.36%, 4=0.54%, 10=2.58%, 20=1.49%, 50=3.61% 00:26:13.653 lat (msec) : 100=17.65%, 250=57.32%, 500=16.34% 00:26:13.653 cpu : usr=0.80%, sys=1.21%, ctx=1841, majf=0, minf=1 00:26:13.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:13.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.653 issued rwts: total=0,3880,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.653 job1: (groupid=0, jobs=1): err= 0: pid=1058827: Fri Dec 13 06:32:04 2024 00:26:13.653 write: IOPS=300, BW=75.1MiB/s (78.8MB/s)(766MiB/10190msec); 0 zone resets 00:26:13.653 slat (usec): min=19, max=138278, avg=2722.11, stdev=7180.88 00:26:13.653 clat (msec): min=6, max=743, avg=210.03, stdev=122.95 00:26:13.653 lat (msec): min=8, max=743, avg=212.75, stdev=124.57 00:26:13.653 clat percentiles (msec): 00:26:13.653 | 1.00th=[ 31], 5.00th=[ 80], 10.00th=[ 86], 20.00th=[ 110], 00:26:13.653 | 30.00th=[ 126], 40.00th=[ 157], 50.00th=[ 197], 60.00th=[ 213], 00:26:13.653 | 70.00th=[ 232], 80.00th=[ 288], 90.00th=[ 380], 95.00th=[ 472], 00:26:13.653 | 99.00th=[ 575], 99.50th=[ 634], 99.90th=[ 718], 99.95th=[ 743], 00:26:13.653 | 99.99th=[ 743] 00:26:13.653 bw ( KiB/s): min=27136, max=175104, per=7.37%, avg=76800.00, stdev=37000.73, samples=20 00:26:13.653 iops : min= 106, max= 684, avg=300.00, stdev=144.53, samples=20 00:26:13.653 lat (msec) : 10=0.10%, 20=0.36%, 50=1.99%, 100=13.71%, 250=60.10% 00:26:13.653 lat (msec) : 500=19.13%, 
750=4.60% 00:26:13.653 cpu : usr=0.78%, sys=1.04%, ctx=1210, majf=0, minf=1 00:26:13.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=97.9% 00:26:13.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.653 issued rwts: total=0,3063,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.653 job2: (groupid=0, jobs=1): err= 0: pid=1058828: Fri Dec 13 06:32:04 2024 00:26:13.653 write: IOPS=313, BW=78.3MiB/s (82.1MB/s)(798MiB/10195msec); 0 zone resets 00:26:13.653 slat (usec): min=24, max=88956, avg=3003.65, stdev=6769.46 00:26:13.653 clat (msec): min=9, max=682, avg=201.22, stdev=120.42 00:26:13.653 lat (msec): min=11, max=682, avg=204.23, stdev=122.06 00:26:13.653 clat percentiles (msec): 00:26:13.653 | 1.00th=[ 37], 5.00th=[ 41], 10.00th=[ 46], 20.00th=[ 99], 00:26:13.653 | 30.00th=[ 126], 40.00th=[ 146], 50.00th=[ 174], 60.00th=[ 215], 00:26:13.653 | 70.00th=[ 271], 80.00th=[ 313], 90.00th=[ 355], 95.00th=[ 418], 00:26:13.653 | 99.00th=[ 550], 99.50th=[ 567], 99.90th=[ 651], 99.95th=[ 684], 00:26:13.653 | 99.99th=[ 684] 00:26:13.653 bw ( KiB/s): min=32768, max=250880, per=7.69%, avg=80085.25, stdev=51003.52, samples=20 00:26:13.653 iops : min= 128, max= 980, avg=312.80, stdev=199.23, samples=20 00:26:13.653 lat (msec) : 10=0.03%, 20=0.34%, 50=10.96%, 100=9.24%, 250=46.46% 00:26:13.653 lat (msec) : 500=30.95%, 750=2.01% 00:26:13.653 cpu : usr=0.83%, sys=1.07%, ctx=931, majf=0, minf=1 00:26:13.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:13.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.653 issued rwts: total=0,3192,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.653 latency : target=0, window=0, percentile=100.00%, 
depth=64 00:26:13.653 job3: (groupid=0, jobs=1): err= 0: pid=1058829: Fri Dec 13 06:32:04 2024 00:26:13.653 write: IOPS=640, BW=160MiB/s (168MB/s)(1621MiB/10124msec); 0 zone resets 00:26:13.653 slat (usec): min=19, max=78702, avg=1046.09, stdev=3831.48 00:26:13.653 clat (usec): min=970, max=578513, avg=98859.21, stdev=100615.82 00:26:13.653 lat (usec): min=1198, max=582588, avg=99905.30, stdev=101675.81 00:26:13.653 clat percentiles (msec): 00:26:13.653 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 17], 20.00th=[ 32], 00:26:13.653 | 30.00th=[ 42], 40.00th=[ 46], 50.00th=[ 55], 60.00th=[ 65], 00:26:13.653 | 70.00th=[ 105], 80.00th=[ 174], 90.00th=[ 253], 95.00th=[ 317], 00:26:13.653 | 99.00th=[ 418], 99.50th=[ 439], 99.90th=[ 542], 99.95th=[ 567], 00:26:13.653 | 99.99th=[ 575] 00:26:13.653 bw ( KiB/s): min=39424, max=410624, per=15.78%, avg=164352.00, stdev=112015.54, samples=20 00:26:13.653 iops : min= 154, max= 1604, avg=642.00, stdev=437.56, samples=20 00:26:13.653 lat (usec) : 1000=0.02% 00:26:13.653 lat (msec) : 2=0.34%, 4=1.37%, 10=5.12%, 20=4.80%, 50=34.06% 00:26:13.653 lat (msec) : 100=23.57%, 250=20.02%, 500=10.46%, 750=0.25% 00:26:13.653 cpu : usr=1.24%, sys=1.98%, ctx=3741, majf=0, minf=1 00:26:13.653 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:26:13.653 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.653 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.653 issued rwts: total=0,6483,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.653 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.653 job4: (groupid=0, jobs=1): err= 0: pid=1058830: Fri Dec 13 06:32:04 2024 00:26:13.653 write: IOPS=425, BW=106MiB/s (111MB/s)(1076MiB/10125msec); 0 zone resets 00:26:13.653 slat (usec): min=23, max=75797, avg=1896.73, stdev=4821.92 00:26:13.653 clat (usec): min=631, max=433001, avg=148573.15, stdev=97107.10 00:26:13.653 lat (usec): min=670, max=437806, avg=150469.88, 
stdev=98171.25 00:26:13.653 clat percentiles (msec): 00:26:13.654 | 1.00th=[ 4], 5.00th=[ 35], 10.00th=[ 39], 20.00th=[ 47], 00:26:13.654 | 30.00th=[ 61], 40.00th=[ 118], 50.00th=[ 142], 60.00th=[ 169], 00:26:13.654 | 70.00th=[ 192], 80.00th=[ 234], 90.00th=[ 300], 95.00th=[ 326], 00:26:13.654 | 99.00th=[ 401], 99.50th=[ 418], 99.90th=[ 430], 99.95th=[ 430], 00:26:13.654 | 99.99th=[ 435] 00:26:13.654 bw ( KiB/s): min=53248, max=323072, per=10.42%, avg=108569.60, stdev=69565.88, samples=20 00:26:13.654 iops : min= 208, max= 1262, avg=424.10, stdev=271.74, samples=20 00:26:13.654 lat (usec) : 750=0.02%, 1000=0.09% 00:26:13.654 lat (msec) : 2=0.44%, 4=0.51%, 10=0.14%, 20=0.07%, 50=24.76% 00:26:13.654 lat (msec) : 100=10.92%, 250=47.13%, 500=15.91% 00:26:13.654 cpu : usr=1.03%, sys=1.32%, ctx=1780, majf=0, minf=1 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.5% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,4305,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job5: (groupid=0, jobs=1): err= 0: pid=1058831: Fri Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=266, BW=66.5MiB/s (69.8MB/s)(674MiB/10134msec); 0 zone resets 00:26:13.654 slat (usec): min=29, max=228577, avg=3345.82, stdev=9003.65 00:26:13.654 clat (usec): min=1291, max=670163, avg=236762.72, stdev=105504.38 00:26:13.654 lat (usec): min=1346, max=670303, avg=240108.54, stdev=106685.89 00:26:13.654 clat percentiles (msec): 00:26:13.654 | 1.00th=[ 6], 5.00th=[ 68], 10.00th=[ 120], 20.00th=[ 167], 00:26:13.654 | 30.00th=[ 194], 40.00th=[ 211], 50.00th=[ 224], 60.00th=[ 241], 00:26:13.654 | 70.00th=[ 266], 80.00th=[ 300], 90.00th=[ 380], 95.00th=[ 439], 00:26:13.654 | 99.00th=[ 550], 99.50th=[ 634], 99.90th=[ 667], 99.95th=[ 667], 00:26:13.654 | 
99.99th=[ 667] 00:26:13.654 bw ( KiB/s): min=33280, max=115712, per=6.47%, avg=67430.40, stdev=20171.21, samples=20 00:26:13.654 iops : min= 130, max= 452, avg=263.40, stdev=78.79, samples=20 00:26:13.654 lat (msec) : 2=0.11%, 4=0.52%, 10=1.11%, 20=0.33%, 50=1.97% 00:26:13.654 lat (msec) : 100=3.56%, 250=57.55%, 500=32.52%, 750=2.34% 00:26:13.654 cpu : usr=0.66%, sys=0.86%, ctx=891, majf=0, minf=1 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,2697,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job6: (groupid=0, jobs=1): err= 0: pid=1058832: Fri Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=302, BW=75.7MiB/s (79.4MB/s)(772MiB/10190msec); 0 zone resets 00:26:13.654 slat (usec): min=28, max=200193, avg=2500.37, stdev=8342.19 00:26:13.654 clat (usec): min=1383, max=719659, avg=208645.36, stdev=125343.35 00:26:13.654 lat (usec): min=1433, max=719732, avg=211145.73, stdev=126507.60 00:26:13.654 clat percentiles (msec): 00:26:13.654 | 1.00th=[ 4], 5.00th=[ 68], 10.00th=[ 84], 20.00th=[ 88], 00:26:13.654 | 30.00th=[ 118], 40.00th=[ 144], 50.00th=[ 182], 60.00th=[ 228], 00:26:13.654 | 70.00th=[ 275], 80.00th=[ 330], 90.00th=[ 380], 95.00th=[ 426], 00:26:13.654 | 99.00th=[ 550], 99.50th=[ 592], 99.90th=[ 693], 99.95th=[ 718], 00:26:13.654 | 99.99th=[ 718] 00:26:13.654 bw ( KiB/s): min=29184, max=176128, per=7.43%, avg=77388.80, stdev=39199.65, samples=20 00:26:13.654 iops : min= 114, max= 688, avg=302.30, stdev=153.12, samples=20 00:26:13.654 lat (msec) : 2=0.13%, 4=1.39%, 10=2.14%, 20=0.55%, 50=0.26% 00:26:13.654 lat (msec) : 100=17.82%, 250=43.13%, 500=32.60%, 750=1.98% 00:26:13.654 cpu : usr=0.70%, sys=0.91%, ctx=1298, majf=0, minf=1 00:26:13.654 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,3086,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job7: (groupid=0, jobs=1): err= 0: pid=1058833: Fri Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=278, BW=69.7MiB/s (73.1MB/s)(711MiB/10193msec); 0 zone resets 00:26:13.654 slat (usec): min=24, max=100376, avg=2707.94, stdev=7284.32 00:26:13.654 clat (usec): min=1079, max=695782, avg=226561.00, stdev=128815.00 00:26:13.654 lat (usec): min=1141, max=695822, avg=229268.93, stdev=130567.90 00:26:13.654 clat percentiles (msec): 00:26:13.654 | 1.00th=[ 22], 5.00th=[ 52], 10.00th=[ 70], 20.00th=[ 111], 00:26:13.654 | 30.00th=[ 128], 40.00th=[ 165], 50.00th=[ 220], 60.00th=[ 249], 00:26:13.654 | 70.00th=[ 309], 80.00th=[ 334], 90.00th=[ 401], 95.00th=[ 464], 00:26:13.654 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 667], 99.95th=[ 693], 00:26:13.654 | 99.99th=[ 693] 00:26:13.654 bw ( KiB/s): min=30720, max=144896, per=6.83%, avg=71150.75, stdev=32324.74, samples=20 00:26:13.654 iops : min= 120, max= 566, avg=277.90, stdev=126.26, samples=20 00:26:13.654 lat (msec) : 2=0.21%, 20=0.60%, 50=3.90%, 100=11.92%, 250=43.72% 00:26:13.654 lat (msec) : 500=37.07%, 750=2.57% 00:26:13.654 cpu : usr=0.68%, sys=0.87%, ctx=1382, majf=0, minf=1 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,2843,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job8: (groupid=0, jobs=1): err= 0: pid=1058834: Fri 
Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=344, BW=86.2MiB/s (90.4MB/s)(878MiB/10181msec); 0 zone resets 00:26:13.654 slat (usec): min=23, max=136935, avg=2007.08, stdev=6995.85 00:26:13.654 clat (usec): min=618, max=661538, avg=183478.79, stdev=140486.28 00:26:13.654 lat (usec): min=649, max=661614, avg=185485.87, stdev=141999.87 00:26:13.654 clat percentiles (usec): 00:26:13.654 | 1.00th=[ 848], 5.00th=[ 4555], 10.00th=[ 20579], 20.00th=[ 72877], 00:26:13.654 | 30.00th=[ 86508], 40.00th=[109577], 50.00th=[135267], 60.00th=[202376], 00:26:13.654 | 70.00th=[242222], 80.00th=[299893], 90.00th=[408945], 95.00th=[455082], 00:26:13.654 | 99.00th=[541066], 99.50th=[557843], 99.90th=[633340], 99.95th=[658506], 00:26:13.654 | 99.99th=[658506] 00:26:13.654 bw ( KiB/s): min=34304, max=176128, per=8.47%, avg=88243.20, stdev=43285.97, samples=20 00:26:13.654 iops : min= 134, max= 688, avg=344.70, stdev=169.09, samples=20 00:26:13.654 lat (usec) : 750=0.57%, 1000=0.77% 00:26:13.654 lat (msec) : 2=0.68%, 4=2.45%, 10=3.28%, 20=2.14%, 50=5.93% 00:26:13.654 lat (msec) : 100=22.42%, 250=32.82%, 500=26.30%, 750=2.65% 00:26:13.654 cpu : usr=0.70%, sys=1.20%, ctx=1958, majf=0, minf=1 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,3510,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job9: (groupid=0, jobs=1): err= 0: pid=1058835: Fri Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=475, BW=119MiB/s (125MB/s)(1210MiB/10168msec); 0 zone resets 00:26:13.654 slat (usec): min=20, max=51726, avg=1308.53, stdev=4285.78 00:26:13.654 clat (usec): min=667, max=526269, avg=133137.28, stdev=112292.58 00:26:13.654 lat (usec): min=704, max=532947, avg=134445.82, stdev=113553.13 
00:26:13.654 clat percentiles (usec): 00:26:13.654 | 1.00th=[ 1401], 5.00th=[ 3359], 10.00th=[ 5932], 20.00th=[ 17695], 00:26:13.654 | 30.00th=[ 48497], 40.00th=[ 80217], 50.00th=[123208], 60.00th=[145753], 00:26:13.654 | 70.00th=[185598], 80.00th=[223347], 90.00th=[283116], 95.00th=[362808], 00:26:13.654 | 99.00th=[446694], 99.50th=[476054], 99.90th=[517997], 99.95th=[522191], 00:26:13.654 | 99.99th=[526386] 00:26:13.654 bw ( KiB/s): min=39424, max=287744, per=11.74%, avg=122240.00, stdev=64055.46, samples=20 00:26:13.654 iops : min= 154, max= 1124, avg=477.50, stdev=250.22, samples=20 00:26:13.654 lat (usec) : 750=0.02%, 1000=0.27% 00:26:13.654 lat (msec) : 2=2.17%, 4=4.07%, 10=7.30%, 20=7.42%, 50=9.18% 00:26:13.654 lat (msec) : 100=14.37%, 250=41.32%, 500=13.56%, 750=0.33% 00:26:13.654 cpu : usr=0.90%, sys=1.59%, ctx=3100, majf=0, minf=2 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,4838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 job10: (groupid=0, jobs=1): err= 0: pid=1058836: Fri Dec 13 06:32:04 2024 00:26:13.654 write: IOPS=354, BW=88.7MiB/s (93.0MB/s)(896MiB/10105msec); 0 zone resets 00:26:13.654 slat (usec): min=20, max=111285, avg=2214.04, stdev=5504.19 00:26:13.654 clat (usec): min=1022, max=469493, avg=177946.38, stdev=82857.74 00:26:13.654 lat (usec): min=1086, max=469557, avg=180160.42, stdev=83899.18 00:26:13.654 clat percentiles (msec): 00:26:13.654 | 1.00th=[ 3], 5.00th=[ 33], 10.00th=[ 78], 20.00th=[ 123], 00:26:13.654 | 30.00th=[ 140], 40.00th=[ 155], 50.00th=[ 176], 60.00th=[ 190], 00:26:13.654 | 70.00th=[ 209], 80.00th=[ 232], 90.00th=[ 284], 95.00th=[ 334], 00:26:13.654 | 99.00th=[ 439], 99.50th=[ 464], 99.90th=[ 468], 99.95th=[ 
468], 00:26:13.654 | 99.99th=[ 468] 00:26:13.654 bw ( KiB/s): min=36864, max=151040, per=8.65%, avg=90137.60, stdev=28939.97, samples=20 00:26:13.654 iops : min= 144, max= 590, avg=352.10, stdev=113.05, samples=20 00:26:13.654 lat (msec) : 2=0.25%, 4=1.51%, 10=0.81%, 20=0.89%, 50=3.10% 00:26:13.654 lat (msec) : 100=6.64%, 250=71.32%, 500=15.49% 00:26:13.654 cpu : usr=0.81%, sys=1.26%, ctx=1617, majf=0, minf=1 00:26:13.654 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:26:13.654 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:13.654 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:13.654 issued rwts: total=0,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:13.654 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:13.654 00:26:13.654 Run status group 0 (all jobs): 00:26:13.654 WRITE: bw=1017MiB/s (1067MB/s), 66.5MiB/s-160MiB/s (69.8MB/s-168MB/s), io=10.1GiB (10.9GB), run=10105-10195msec 00:26:13.654 00:26:13.654 Disk stats (read/write): 00:26:13.654 nvme0n1: ios=49/7568, merge=0/0, ticks=39/1211757, in_queue=1211796, util=97.17% 00:26:13.654 nvme10n1: ios=42/6091, merge=0/0, ticks=1716/1235853, in_queue=1237569, util=99.97% 00:26:13.654 nvme1n1: ios=43/6343, merge=0/0, ticks=1226/1231220, in_queue=1232446, util=99.90% 00:26:13.654 nvme2n1: ios=0/12775, merge=0/0, ticks=0/1208524, in_queue=1208524, util=97.56% 00:26:13.654 nvme3n1: ios=0/8443, merge=0/0, ticks=0/1209235, in_queue=1209235, util=97.65% 00:26:13.654 nvme4n1: ios=48/5232, merge=0/0, ticks=5523/1152239, in_queue=1157762, util=99.92% 00:26:13.654 nvme5n1: ios=46/6139, merge=0/0, ticks=5084/1185762, in_queue=1190846, util=99.93% 00:26:13.654 nvme6n1: ios=43/5648, merge=0/0, ticks=1060/1238557, in_queue=1239617, util=99.92% 00:26:13.654 nvme7n1: ios=43/6996, merge=0/0, ticks=1531/1236315, in_queue=1237846, util=99.99% 00:26:13.654 nvme8n1: ios=0/9665, merge=0/0, ticks=0/1248315, in_queue=1248315, util=98.97% 
00:26:13.654 nvme9n1: ios=40/6986, merge=0/0, ticks=2381/1204825, in_queue=1207206, util=99.98% 00:26:13.654 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:13.654 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:13.654 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.654 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:13.654 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:13.654 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:13.654 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:13.655 
06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:13.655 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:13.913 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:13.914 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in 
$(seq 1 $NVMF_SUBSYS) 00:26:14.172 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:14.431 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.431 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:14.690 NQN:nqn.2016-06.io.spdk:cnode4 
disconnected 1 controller(s) 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.690 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:14.950 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:14.950 06:32:06 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:14.950 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:15.209 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:15.209 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.210 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:15.469 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.469 06:32:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:15.469 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:15.469 06:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.469 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:15.728 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:15.728 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:15.728 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.987 06:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:15.987 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:15.987 rmmod nvme_tcp 00:26:15.987 rmmod nvme_fabrics 00:26:15.987 rmmod nvme_keyring 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 1051255 ']' 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 1051255 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 1051255 ']' 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 1051255 00:26:15.987 06:32:07 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1051255 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1051255' 00:26:15.987 killing process with pid 1051255 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 1051255 00:26:15.987 06:32:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 1051255 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k 
]] 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:16.556 06:32:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.462 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:18.462 00:26:18.462 real 1m10.967s 00:26:18.462 user 4m16.675s 00:26:18.462 sys 0m17.221s 00:26:18.462 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:18.462 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:18.462 ************************************ 00:26:18.462 END TEST nvmf_multiconnection 00:26:18.462 ************************************ 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:18.721 ************************************ 00:26:18.721 START TEST nvmf_initiator_timeout 00:26:18.721 ************************************ 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh 
--transport=tcp 00:26:18.721 * Looking for test storage... 00:26:18.721 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:18.721 06:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:18.721 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:18.722 06:32:10 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:18.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.722 --rc genhtml_branch_coverage=1 00:26:18.722 --rc genhtml_function_coverage=1 00:26:18.722 --rc genhtml_legend=1 00:26:18.722 --rc geninfo_all_blocks=1 00:26:18.722 --rc geninfo_unexecuted_blocks=1 00:26:18.722 00:26:18.722 ' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:18.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.722 --rc genhtml_branch_coverage=1 00:26:18.722 --rc genhtml_function_coverage=1 00:26:18.722 --rc genhtml_legend=1 00:26:18.722 --rc geninfo_all_blocks=1 00:26:18.722 --rc geninfo_unexecuted_blocks=1 00:26:18.722 00:26:18.722 ' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:18.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.722 --rc genhtml_branch_coverage=1 00:26:18.722 --rc genhtml_function_coverage=1 00:26:18.722 --rc genhtml_legend=1 00:26:18.722 --rc geninfo_all_blocks=1 00:26:18.722 --rc geninfo_unexecuted_blocks=1 00:26:18.722 00:26:18.722 ' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:18.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:18.722 --rc genhtml_branch_coverage=1 00:26:18.722 --rc genhtml_function_coverage=1 00:26:18.722 --rc genhtml_legend=1 00:26:18.722 --rc geninfo_all_blocks=1 00:26:18.722 --rc geninfo_unexecuted_blocks=1 00:26:18.722 00:26:18.722 ' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:18.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:18.722 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:18.982 06:32:10 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.551 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.551 06:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:25.551 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:25.551 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:25.551 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:25.551 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:25.552 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:25.552 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:25.552 06:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:25.552 Found net devices under 0000:af:00.0: cvl_0_0 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.552 06:32:15 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:25.552 Found net devices under 0000:af:00.1: cvl_0_1 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:25.552 06:32:15 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:25.552 06:32:16 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:25.552 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:25.552 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:25.552 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:26:25.552 00:26:25.552 --- 10.0.0.2 ping statistics --- 00:26:25.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.553 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:25.553 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:25.553 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.240 ms 00:26:25.553 00:26:25.553 --- 10.0.0.1 ping statistics --- 00:26:25.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:25.553 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=1063931 
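The `nvmf_tcp_init` steps traced above isolate the target NIC in its own network namespace so that initiator and target can exchange real NVMe/TCP traffic on a single host. A condensed sketch of that topology setup follows (interface and namespace names are copied from the trace; the `run`/`DRYRUN` wrapper is an addition here so the sketch can be executed without root — set `DRYRUN=0` to actually run the commands, which requires privileges and the real interfaces):

```shell
#!/usr/bin/env bash
# Sketch of the netns topology built by nvmf_tcp_init in the trace:
# target NIC -> namespace cvl_0_0_ns_spdk at 10.0.0.2/24,
# initiator NIC stays in the root namespace at 10.0.0.1/24.
set -u
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk
# Print commands by default; DRYRUN=0 executes them (needs root).
run() { if [[ ${DRYRUN:-1} == 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"               # target side
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"        # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP discovery/IO port toward the initiator interface.
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
# Bidirectional reachability check, as in the ping output above.
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

After this, the harness prefixes the target application with `ip netns exec cvl_0_0_ns_spdk`, which is why `nvmf_tgt` is launched through `NVMF_TARGET_NS_CMD` in the log records that follow.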
00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 1063931 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 1063931 ']' 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.553 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 [2024-12-13 06:32:16.308166] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:26:25.553 [2024-12-13 06:32:16.308221] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.553 [2024-12-13 06:32:16.389755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:25.553 [2024-12-13 06:32:16.412597] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:25.553 [2024-12-13 06:32:16.412638] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:25.553 [2024-12-13 06:32:16.412645] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:25.553 [2024-12-13 06:32:16.412651] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:25.553 [2024-12-13 06:32:16.412655] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:25.553 [2024-12-13 06:32:16.414096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:25.553 [2024-12-13 06:32:16.414205] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:25.553 [2024-12-13 06:32:16.414292] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.553 [2024-12-13 06:32:16.414294] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:25.553 
06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 Malloc0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 Delay0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 [2024-12-13 06:32:16.609203] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.553 [2024-12-13 06:32:16.634492] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:25.553 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:26.121 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:26.121 
06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:26.121 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:26.121 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:26.121 06:32:17 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=1064639 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:28.656 06:32:19 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:28.656 [global] 00:26:28.656 thread=1 00:26:28.656 invalidate=1 00:26:28.656 rw=write 00:26:28.656 time_based=1 00:26:28.656 runtime=60 00:26:28.656 ioengine=libaio 00:26:28.656 direct=1 00:26:28.656 bs=4096 00:26:28.656 
iodepth=1 00:26:28.656 norandommap=0 00:26:28.656 numjobs=1 00:26:28.656 00:26:28.656 verify_dump=1 00:26:28.656 verify_backlog=512 00:26:28.656 verify_state_save=0 00:26:28.656 do_verify=1 00:26:28.656 verify=crc32c-intel 00:26:28.656 [job0] 00:26:28.656 filename=/dev/nvme0n1 00:26:28.656 Could not set queue depth (nvme0n1) 00:26:28.656 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:28.656 fio-3.35 00:26:28.656 Starting 1 thread 00:26:31.191 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:31.191 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.192 true 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.192 true 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.192 true 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:31.192 true 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.192 06:32:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 true 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 true 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.479 06:32:25 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 true 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:34.479 true 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:34.479 06:32:25 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 1064639 00:27:30.708 00:27:30.708 job0: (groupid=0, jobs=1): err= 0: pid=1064761: Fri Dec 13 06:33:20 2024 00:27:30.708 read: IOPS=64, BW=259KiB/s (265kB/s)(15.2MiB/60019msec) 00:27:30.708 slat (usec): min=6, max=11658, avg=13.51, stdev=205.53 00:27:30.708 clat (usec): min=175, max=41620k, avg=15208.63, stdev=667531.73 00:27:30.708 lat (usec): min=215, max=41620k, avg=15222.14, stdev=667532.14 00:27:30.708 clat percentiles (usec): 00:27:30.708 | 1.00th=[ 217], 5.00th=[ 225], 10.00th=[ 231], 00:27:30.708 | 20.00th=[ 239], 30.00th=[ 245], 40.00th=[ 251], 00:27:30.708 | 50.00th=[ 258], 60.00th=[ 265], 70.00th=[ 277], 00:27:30.708 | 80.00th=[ 293], 90.00th=[ 40633], 95.00th=[ 41157], 
00:27:30.708 | 99.00th=[ 41157], 99.50th=[ 41157], 99.90th=[ 41681], 00:27:30.708 | 99.95th=[ 43779], 99.99th=[17112761] 00:27:30.708 write: IOPS=68, BW=273KiB/s (280kB/s)(16.0MiB/60019msec); 0 zone resets 00:27:30.708 slat (usec): min=9, max=27542, avg=17.96, stdev=430.18 00:27:30.708 clat (usec): min=145, max=408, avg=180.17, stdev=14.98 00:27:30.708 lat (usec): min=156, max=27918, avg=198.13, stdev=433.50 00:27:30.708 clat percentiles (usec): 00:27:30.708 | 1.00th=[ 155], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:27:30.708 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:27:30.708 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 202], 00:27:30.708 | 99.00th=[ 221], 99.50th=[ 231], 99.90th=[ 310], 99.95th=[ 347], 00:27:30.708 | 99.99th=[ 408] 00:27:30.708 bw ( KiB/s): min= 496, max= 8400, per=100.00%, avg=5461.33, stdev=3183.72, samples=6 00:27:30.708 iops : min= 124, max= 2100, avg=1365.33, stdev=795.93, samples=6 00:27:30.708 lat (usec) : 250=70.24%, 500=24.65%, 750=0.01% 00:27:30.708 lat (msec) : 2=0.01%, 50=5.07%, >=2000=0.01% 00:27:30.708 cpu : usr=0.08%, sys=0.17%, ctx=7992, majf=0, minf=1 00:27:30.708 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:30.708 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.708 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:30.708 issued rwts: total=3888,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:30.708 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:30.708 00:27:30.708 Run status group 0 (all jobs): 00:27:30.708 READ: bw=259KiB/s (265kB/s), 259KiB/s-259KiB/s (265kB/s-265kB/s), io=15.2MiB (15.9MB), run=60019-60019msec 00:27:30.708 WRITE: bw=273KiB/s (280kB/s), 273KiB/s-273KiB/s (280kB/s-280kB/s), io=16.0MiB (16.8MB), run=60019-60019msec 00:27:30.708 00:27:30.708 Disk stats (read/write): 00:27:30.708 nvme0n1: ios=3933/4096, merge=0/0, ticks=18671/704, in_queue=19375, 
util=99.90% 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:30.708 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:30.708 nvmf hotplug test: fio successful as expected 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:30.708 06:33:20 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:30.708 rmmod nvme_tcp 00:27:30.708 rmmod nvme_fabrics 00:27:30.708 rmmod nvme_keyring 00:27:30.708 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 1063931 ']' 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 1063931 ']' 00:27:30.709 
06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1063931' 00:27:30.709 killing process with pid 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 1063931 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:30.709 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.276 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:31.276 00:27:31.276 real 1m12.530s 00:27:31.276 user 4m21.738s 00:27:31.276 sys 0m6.489s 00:27:31.276 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:31.277 ************************************ 00:27:31.277 END TEST nvmf_initiator_timeout 00:27:31.277 ************************************ 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:31.277 06:33:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@315 -- # pci_devs=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 
-- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:37.848 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:37.848 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:37.848 Found net devices under 0000:af:00.0: cvl_0_0 00:27:37.848 06:33:28 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:37.848 Found net devices under 0000:af:00.1: cvl_0_1 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:37.848 ************************************ 00:27:37.848 START 
TEST nvmf_perf_adq 00:27:37.848 ************************************ 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:37.848 * Looking for test storage... 00:27:37.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:37.848 06:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:37.848 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:37.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.849 --rc genhtml_branch_coverage=1 00:27:37.849 --rc genhtml_function_coverage=1 00:27:37.849 --rc genhtml_legend=1 00:27:37.849 --rc geninfo_all_blocks=1 00:27:37.849 --rc geninfo_unexecuted_blocks=1 00:27:37.849 00:27:37.849 ' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:37.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.849 --rc genhtml_branch_coverage=1 00:27:37.849 --rc genhtml_function_coverage=1 00:27:37.849 --rc genhtml_legend=1 00:27:37.849 --rc geninfo_all_blocks=1 00:27:37.849 --rc geninfo_unexecuted_blocks=1 00:27:37.849 00:27:37.849 ' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:37.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.849 --rc genhtml_branch_coverage=1 00:27:37.849 --rc genhtml_function_coverage=1 00:27:37.849 --rc genhtml_legend=1 00:27:37.849 --rc geninfo_all_blocks=1 00:27:37.849 --rc geninfo_unexecuted_blocks=1 00:27:37.849 00:27:37.849 ' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:37.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:37.849 --rc genhtml_branch_coverage=1 00:27:37.849 --rc genhtml_function_coverage=1 00:27:37.849 --rc genhtml_legend=1 00:27:37.849 --rc geninfo_all_blocks=1 00:27:37.849 --rc geninfo_unexecuted_blocks=1 00:27:37.849 00:27:37.849 ' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:37.849 
06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:37.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:37.849 06:33:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:37.849 06:33:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:43.124 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:43.125 06:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:43.125 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:43.125 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:43.125 Found net devices under 0000:af:00.0: cvl_0_0 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:43.125 Found net devices under 0000:af:00.1: cvl_0_1 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:43.125 06:33:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:27:43.691 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:27:46.226 06:33:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:51.500 06:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.500 06:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:51.500 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.500 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:27:51.501 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:51.501 Found net devices under 0000:af:00.0: cvl_0_0 
00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:51.501 Found net devices under 0000:af:00.1: cvl_0_1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:27:51.501 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:51.501 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.501 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.498 ms 00:27:51.501 00:27:51.501 --- 10.0.0.2 ping statistics --- 00:27:51.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.501 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.501 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.501 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:27:51.501 00:27:51.501 --- 10.0.0.1 ping statistics --- 00:27:51.501 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.501 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1082772 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1082772 00:27:51.501 
06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1082772 ']' 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.501 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.501 [2024-12-13 06:33:43.150983] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:27:51.501 [2024-12-13 06:33:43.151030] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.761 [2024-12-13 06:33:43.231112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:51.761 [2024-12-13 06:33:43.253962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.761 [2024-12-13 06:33:43.253998] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:51.761 [2024-12-13 06:33:43.254005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.761 [2024-12-13 06:33:43.254011] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.761 [2024-12-13 06:33:43.254017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.761 [2024-12-13 06:33:43.255482] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.761 [2024-12-13 06:33:43.255537] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.761 [2024-12-13 06:33:43.255644] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:51.761 [2024-12-13 06:33:43.255645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.761 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 [2024-12-13 06:33:43.467873] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 
06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 Malloc1 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:52.021 [2024-12-13 06:33:43.525969] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1082909 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:27:52.021 06:33:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:27:53.927 "tick_rate": 2100000000, 00:27:53.927 "poll_groups": [ 00:27:53.927 { 00:27:53.927 "name": "nvmf_tgt_poll_group_000", 00:27:53.927 "admin_qpairs": 1, 00:27:53.927 "io_qpairs": 1, 00:27:53.927 "current_admin_qpairs": 1, 00:27:53.927 "current_io_qpairs": 1, 00:27:53.927 "pending_bdev_io": 0, 00:27:53.927 "completed_nvme_io": 19229, 00:27:53.927 "transports": [ 00:27:53.927 { 00:27:53.927 "trtype": "TCP" 00:27:53.927 } 00:27:53.927 ] 00:27:53.927 }, 00:27:53.927 { 00:27:53.927 "name": "nvmf_tgt_poll_group_001", 00:27:53.927 "admin_qpairs": 0, 00:27:53.927 "io_qpairs": 1, 00:27:53.927 "current_admin_qpairs": 0, 00:27:53.927 "current_io_qpairs": 1, 00:27:53.927 "pending_bdev_io": 0, 00:27:53.927 "completed_nvme_io": 19602, 00:27:53.927 "transports": [ 
00:27:53.927 { 00:27:53.927 "trtype": "TCP" 00:27:53.927 } 00:27:53.927 ] 00:27:53.927 }, 00:27:53.927 { 00:27:53.927 "name": "nvmf_tgt_poll_group_002", 00:27:53.927 "admin_qpairs": 0, 00:27:53.927 "io_qpairs": 1, 00:27:53.927 "current_admin_qpairs": 0, 00:27:53.927 "current_io_qpairs": 1, 00:27:53.927 "pending_bdev_io": 0, 00:27:53.927 "completed_nvme_io": 20045, 00:27:53.927 "transports": [ 00:27:53.927 { 00:27:53.927 "trtype": "TCP" 00:27:53.927 } 00:27:53.927 ] 00:27:53.927 }, 00:27:53.927 { 00:27:53.927 "name": "nvmf_tgt_poll_group_003", 00:27:53.927 "admin_qpairs": 0, 00:27:53.927 "io_qpairs": 1, 00:27:53.927 "current_admin_qpairs": 0, 00:27:53.927 "current_io_qpairs": 1, 00:27:53.927 "pending_bdev_io": 0, 00:27:53.927 "completed_nvme_io": 19479, 00:27:53.927 "transports": [ 00:27:53.927 { 00:27:53.927 "trtype": "TCP" 00:27:53.927 } 00:27:53.927 ] 00:27:53.927 } 00:27:53.927 ] 00:27:53.927 }' 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:53.927 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:27:54.186 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:27:54.186 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:27:54.186 06:33:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1082909 00:28:02.307 Initializing NVMe Controllers 00:28:02.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:02.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:02.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:02.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:02.307 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:02.307 Initialization complete. Launching workers. 00:28:02.307 ======================================================== 00:28:02.307 Latency(us) 00:28:02.307 Device Information : IOPS MiB/s Average min max 00:28:02.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10262.78 40.09 6238.47 1916.52 11058.05 00:28:02.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10413.58 40.68 6147.81 2006.66 10235.49 00:28:02.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10603.68 41.42 6037.73 1909.39 10274.58 00:28:02.307 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10310.08 40.27 6206.86 2311.15 10598.60 00:28:02.307 ======================================================== 00:28:02.307 Total : 41590.13 162.46 6156.76 1909.39 11058.05 00:28:02.307 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:02.307 rmmod nvme_tcp 00:28:02.307 rmmod nvme_fabrics 00:28:02.307 rmmod nvme_keyring 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:02.307 06:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1082772 ']' 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1082772 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1082772 ']' 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1082772 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1082772 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1082772' 00:28:02.307 killing process with pid 1082772 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1082772 00:28:02.307 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1082772 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:02.566 
06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:02.566 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:02.566 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:02.566 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.566 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:02.566 06:33:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:04.471 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:04.471 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:04.471 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:04.471 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:05.849 06:33:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:08.382 06:33:59 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:13.656 06:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:13.656 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:13.656 06:34:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:13.656 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:28:13.656 Found net devices under 0000:af:00.0: cvl_0_0 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:13.656 Found net devices under 0000:af:00.1: cvl_0_1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:28:13.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:13.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.570 ms
00:28:13.656
00:28:13.656 --- 10.0.0.2 ping statistics ---
00:28:13.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:13.656 rtt min/avg/max/mdev = 0.570/0.570/0.570/0.000 ms
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:28:13.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:13.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms
00:28:13.656
00:28:13.656 --- 10.0.0.1 ping statistics ---
00:28:13.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:13.656 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:28:13.656 net.core.busy_poll = 1
00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq --
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:13.656 net.core.busy_read = 1 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:13.656 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=1086811 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 1086811 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1086811 ']' 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:13.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:13.656 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.656 [2024-12-13 06:34:05.277981] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:13.656 [2024-12-13 06:34:05.278025] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:13.915 [2024-12-13 06:34:05.355799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:13.915 [2024-12-13 06:34:05.378855] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:13.915 [2024-12-13 06:34:05.378894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:13.915 [2024-12-13 06:34:05.378902] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:13.915 [2024-12-13 06:34:05.378908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:28:13.915 [2024-12-13 06:34:05.378913] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:13.915 [2024-12-13 06:34:05.380331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:13.915 [2024-12-13 06:34:05.380440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:13.915 [2024-12-13 06:34:05.380552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:13.915 [2024-12-13 06:34:05.380552] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:13.915 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 [2024-12-13 06:34:05.597337] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.174 06:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 Malloc1 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:14.174 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:14.174 [2024-12-13 06:34:05.663284] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:14.175 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:14.175 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1086840 
00:28:14.175 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:14.175 06:34:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:16.080 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:16.080 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.080 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:16.080 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.080 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:16.080 "tick_rate": 2100000000, 00:28:16.080 "poll_groups": [ 00:28:16.080 { 00:28:16.080 "name": "nvmf_tgt_poll_group_000", 00:28:16.080 "admin_qpairs": 1, 00:28:16.081 "io_qpairs": 0, 00:28:16.081 "current_admin_qpairs": 1, 00:28:16.081 "current_io_qpairs": 0, 00:28:16.081 "pending_bdev_io": 0, 00:28:16.081 "completed_nvme_io": 0, 00:28:16.081 "transports": [ 00:28:16.081 { 00:28:16.081 "trtype": "TCP" 00:28:16.081 } 00:28:16.081 ] 00:28:16.081 }, 00:28:16.081 { 00:28:16.081 "name": "nvmf_tgt_poll_group_001", 00:28:16.081 "admin_qpairs": 0, 00:28:16.081 "io_qpairs": 4, 00:28:16.081 "current_admin_qpairs": 0, 00:28:16.081 "current_io_qpairs": 4, 00:28:16.081 "pending_bdev_io": 0, 00:28:16.081 "completed_nvme_io": 45271, 00:28:16.081 "transports": [ 00:28:16.081 { 00:28:16.081 "trtype": "TCP" 00:28:16.081 } 00:28:16.081 ] 00:28:16.081 }, 00:28:16.081 { 00:28:16.081 "name": "nvmf_tgt_poll_group_002", 00:28:16.081 "admin_qpairs": 0, 00:28:16.081 "io_qpairs": 0, 00:28:16.081 "current_admin_qpairs": 0, 00:28:16.081 
"current_io_qpairs": 0, 00:28:16.081 "pending_bdev_io": 0, 00:28:16.081 "completed_nvme_io": 0, 00:28:16.081 "transports": [ 00:28:16.081 { 00:28:16.081 "trtype": "TCP" 00:28:16.081 } 00:28:16.081 ] 00:28:16.081 }, 00:28:16.081 { 00:28:16.081 "name": "nvmf_tgt_poll_group_003", 00:28:16.081 "admin_qpairs": 0, 00:28:16.081 "io_qpairs": 0, 00:28:16.081 "current_admin_qpairs": 0, 00:28:16.081 "current_io_qpairs": 0, 00:28:16.081 "pending_bdev_io": 0, 00:28:16.081 "completed_nvme_io": 0, 00:28:16.081 "transports": [ 00:28:16.081 { 00:28:16.081 "trtype": "TCP" 00:28:16.081 } 00:28:16.081 ] 00:28:16.081 } 00:28:16.081 ] 00:28:16.081 }' 00:28:16.081 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:16.081 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:16.340 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:28:16.340 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:28:16.340 06:34:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1086840 00:28:24.460 Initializing NVMe Controllers 00:28:24.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:24.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:24.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:24.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:24.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:24.460 Initialization complete. Launching workers. 
00:28:24.460 ========================================================
00:28:24.460 Latency(us)
00:28:24.460 Device Information : IOPS MiB/s Average min max
00:28:24.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 6058.20 23.66 10579.66 1425.50 55531.87
00:28:24.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5674.70 22.17 11294.50 1210.00 56325.62
00:28:24.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6138.20 23.98 10427.82 1419.29 55517.67
00:28:24.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5480.70 21.41 11713.65 1452.66 56790.57
00:28:24.460 ========================================================
00:28:24.460 Total : 23351.80 91.22 10979.61 1210.00 56790.57
00:28:24.460
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20}
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:28:24.460 rmmod nvme_tcp
00:28:24.460 rmmod nvme_fabrics
00:28:24.460 rmmod nvme_keyring
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e
00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0
00:28:24.460 06:34:15
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 1086811 ']' 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 1086811 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1086811 ']' 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1086811 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1086811 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1086811' 00:28:24.460 killing process with pid 1086811 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1086811 00:28:24.460 06:34:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1086811 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:24.719 
06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:24.719 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:28.014 00:28:28.014 real 0m50.909s 00:28:28.014 user 2m43.464s 00:28:28.014 sys 0m10.539s 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:28.014 ************************************ 00:28:28.014 END TEST nvmf_perf_adq 00:28:28.014 ************************************ 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:28.014 ************************************ 00:28:28.014 START TEST nvmf_shutdown 00:28:28.014 ************************************ 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:28.014 * Looking for test storage... 00:28:28.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:28.014 06:34:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:28.014 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:28.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.015 --rc genhtml_branch_coverage=1 00:28:28.015 --rc genhtml_function_coverage=1 00:28:28.015 --rc genhtml_legend=1 00:28:28.015 --rc geninfo_all_blocks=1 00:28:28.015 --rc geninfo_unexecuted_blocks=1 00:28:28.015 00:28:28.015 ' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:28.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.015 --rc genhtml_branch_coverage=1 00:28:28.015 --rc genhtml_function_coverage=1 00:28:28.015 --rc genhtml_legend=1 00:28:28.015 --rc geninfo_all_blocks=1 00:28:28.015 --rc geninfo_unexecuted_blocks=1 00:28:28.015 00:28:28.015 ' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:28.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.015 --rc genhtml_branch_coverage=1 00:28:28.015 --rc genhtml_function_coverage=1 00:28:28.015 --rc genhtml_legend=1 00:28:28.015 --rc geninfo_all_blocks=1 00:28:28.015 --rc geninfo_unexecuted_blocks=1 00:28:28.015 00:28:28.015 ' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:28.015 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:28.015 --rc genhtml_branch_coverage=1 00:28:28.015 --rc genhtml_function_coverage=1 00:28:28.015 --rc genhtml_legend=1 00:28:28.015 --rc geninfo_all_blocks=1 00:28:28.015 --rc geninfo_unexecuted_blocks=1 00:28:28.015 00:28:28.015 ' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:28.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:28.015 ************************************ 00:28:28.015 START TEST nvmf_shutdown_tc1 00:28:28.015 ************************************ 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:28.015 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:34.586 06:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:34.586 06:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:34.586 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.586 06:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:34.586 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:34.586 Found net devices under 0000:af:00.0: cvl_0_0 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:34.586 Found net devices under 0000:af:00.1: cvl_0_1 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:34.586 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:34.586 06:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:34.587 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:34.587 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.327 ms 00:28:34.587 00:28:34.587 --- 10.0.0.2 ping statistics --- 00:28:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.587 rtt min/avg/max/mdev = 0.327/0.327/0.327/0.000 ms 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:34.587 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:34.587 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:28:34.587 00:28:34.587 --- 10.0.0.1 ping statistics --- 00:28:34.587 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:34.587 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=1092175 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 1092175 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092175 ']' 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:34.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 [2024-12-13 06:34:25.629512] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:34.587 [2024-12-13 06:34:25.629559] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:34.587 [2024-12-13 06:34:25.709306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:34.587 [2024-12-13 06:34:25.731054] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:34.587 [2024-12-13 06:34:25.731095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:34.587 [2024-12-13 06:34:25.731102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:34.587 [2024-12-13 06:34:25.731107] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:34.587 [2024-12-13 06:34:25.731112] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:34.587 [2024-12-13 06:34:25.732577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:34.587 [2024-12-13 06:34:25.732683] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:34.587 [2024-12-13 06:34:25.732768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:34.587 [2024-12-13 06:34:25.732768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 [2024-12-13 06:34:25.872714] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.587 06:34:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.587 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.587 Malloc1 00:28:34.587 [2024-12-13 06:34:25.988291] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:34.587 Malloc2 00:28:34.587 Malloc3 00:28:34.587 Malloc4 00:28:34.587 Malloc5 00:28:34.587 Malloc6 00:28:34.587 Malloc7 00:28:34.847 Malloc8 00:28:34.847 Malloc9 
00:28:34.847 Malloc10 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1092385 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1092385 /var/tmp/bdevperf.sock 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1092385 ']' 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:34.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": ${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": ${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": ${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": 
${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": ${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.847 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.847 { 00:28:34.847 "params": { 00:28:34.847 "name": "Nvme$subsystem", 00:28:34.847 "trtype": "$TEST_TRANSPORT", 00:28:34.847 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.847 "adrfam": "ipv4", 00:28:34.847 "trsvcid": "$NVMF_PORT", 00:28:34.847 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.847 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.847 "hdgst": ${hdgst:-false}, 00:28:34.847 "ddgst": ${ddgst:-false} 00:28:34.847 }, 00:28:34.847 "method": "bdev_nvme_attach_controller" 
00:28:34.847 } 00:28:34.847 EOF 00:28:34.847 )") 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.848 { 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme$subsystem", 00:28:34.848 "trtype": "$TEST_TRANSPORT", 00:28:34.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "$NVMF_PORT", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.848 "hdgst": ${hdgst:-false}, 00:28:34.848 "ddgst": ${ddgst:-false} 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 } 00:28:34.848 EOF 00:28:34.848 )") 00:28:34.848 [2024-12-13 06:34:26.458954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:34.848 [2024-12-13 06:34:26.459006] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.848 { 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme$subsystem", 00:28:34.848 "trtype": "$TEST_TRANSPORT", 00:28:34.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "$NVMF_PORT", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.848 "hdgst": ${hdgst:-false}, 00:28:34.848 "ddgst": ${ddgst:-false} 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 } 00:28:34.848 EOF 00:28:34.848 )") 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.848 { 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme$subsystem", 00:28:34.848 "trtype": "$TEST_TRANSPORT", 00:28:34.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "$NVMF_PORT", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.848 "hdgst": ${hdgst:-false}, 
00:28:34.848 "ddgst": ${ddgst:-false} 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 } 00:28:34.848 EOF 00:28:34.848 )") 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:34.848 { 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme$subsystem", 00:28:34.848 "trtype": "$TEST_TRANSPORT", 00:28:34.848 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "$NVMF_PORT", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:34.848 "hdgst": ${hdgst:-false}, 00:28:34.848 "ddgst": ${ddgst:-false} 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 } 00:28:34.848 EOF 00:28:34.848 )") 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:34.848 06:34:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme1", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme2", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme3", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme4", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 
00:28:34.848 "name": "Nvme5", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme6", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme7", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme8", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme9", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 },{ 00:28:34.848 "params": { 00:28:34.848 "name": "Nvme10", 00:28:34.848 "trtype": "tcp", 00:28:34.848 "traddr": "10.0.0.2", 00:28:34.848 "adrfam": "ipv4", 00:28:34.848 "trsvcid": "4420", 00:28:34.848 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:34.848 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:34.848 "hdgst": false, 00:28:34.848 "ddgst": false 00:28:34.848 }, 00:28:34.848 "method": "bdev_nvme_attach_controller" 00:28:34.848 }' 00:28:35.107 [2024-12-13 06:34:26.540327] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:35.107 [2024-12-13 06:34:26.562529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1092385 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:37.011 06:34:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:37.949 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1092385 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}")
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1092175
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=()
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:28:37.949 {
00:28:37.949 "params": {
00:28:37.949 "name": "Nvme$subsystem",
00:28:37.949 "trtype": "$TEST_TRANSPORT",
00:28:37.949 "traddr": "$NVMF_FIRST_TARGET_IP",
00:28:37.949 "adrfam": "ipv4",
00:28:37.949 "trsvcid": "$NVMF_PORT",
00:28:37.949 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:28:37.949 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:28:37.949 "hdgst": ${hdgst:-false},
00:28:37.949 "ddgst": ${ddgst:-false}
00:28:37.949 },
00:28:37.949 "method": "bdev_nvme_attach_controller"
00:28:37.949 }
00:28:37.949 EOF
00:28:37.949 )")
00:28:37.949 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat
00:28:37.949 [2024-12-13 06:34:29.394678] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:28:37.949 [2024-12-13 06:34:29.394723] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1092918 ]
00:28:37.950 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq .
00:28:37.950 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:37.950 06:34:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme1", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme2", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme3", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme4", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 
00:28:37.950 "name": "Nvme5", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme6", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme7", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme8", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme9", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 },{ 00:28:37.950 "params": { 00:28:37.950 "name": "Nvme10", 00:28:37.950 "trtype": "tcp", 00:28:37.950 "traddr": "10.0.0.2", 00:28:37.950 "adrfam": "ipv4", 00:28:37.950 "trsvcid": "4420", 00:28:37.950 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:37.950 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:37.950 "hdgst": false, 00:28:37.950 "ddgst": false 00:28:37.950 }, 00:28:37.950 "method": "bdev_nvme_attach_controller" 00:28:37.950 }' 00:28:37.950 [2024-12-13 06:34:29.470311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.950 [2024-12-13 06:34:29.492488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:39.327 Running I/O for 1 seconds... 00:28:40.339 2262.00 IOPS, 141.38 MiB/s 00:28:40.339 Latency(us) 00:28:40.339 [2024-12-13T05:34:31.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.339 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme1n1 : 1.16 276.87 17.30 0.00 0.00 228619.41 15478.98 217704.35 00:28:40.339 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme2n1 : 1.16 276.02 17.25 0.00 0.00 225149.32 11234.74 209715.20 00:28:40.339 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme3n1 : 1.15 277.10 17.32 0.00 0.00 222910.27 15853.47 216705.71 00:28:40.339 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme4n1 : 1.12 295.74 18.48 0.00 0.00 204862.98 3900.95 208716.56 00:28:40.339 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme5n1 : 1.17 274.59 17.16 0.00 0.00 218817.29 18599.74 227690.79 00:28:40.339 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme6n1 : 1.15 278.89 17.43 0.00 0.00 212047.19 16352.79 203723.34 00:28:40.339 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme7n1 : 1.17 272.98 17.06 0.00 0.00 213977.38 15291.73 224694.86 00:28:40.339 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme8n1 : 1.16 275.11 17.19 0.00 0.00 209129.86 15291.73 221698.93 00:28:40.339 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.339 Verification LBA range: start 0x0 length 0x400 00:28:40.339 Nvme9n1 : 1.18 272.24 17.01 0.00 0.00 208567.64 15229.32 227690.79 00:28:40.340 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:40.340 Verification LBA range: start 0x0 length 0x400 00:28:40.340 Nvme10n1 : 1.17 283.02 17.69 0.00 0.00 197042.17 2481.01 234681.30 00:28:40.340 [2024-12-13T05:34:31.994Z] =================================================================================================================== 00:28:40.340 [2024-12-13T05:34:31.994Z] Total : 2782.55 173.91 0.00 0.00 214022.50 2481.01 234681.30 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.629 rmmod nvme_tcp 00:28:40.629 rmmod nvme_fabrics 00:28:40.629 rmmod nvme_keyring 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 1092175 ']' 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 1092175 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1092175 ']' 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1092175 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1092175 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1092175' 00:28:40.629 killing process with pid 1092175 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1092175 00:28:40.629 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1092175 00:28:40.923 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.924 06:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.924 06:34:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:43.530 00:28:43.530 real 0m15.071s 00:28:43.530 user 0m32.885s 00:28:43.530 sys 0m5.736s 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:43.530 ************************************ 00:28:43.530 END TEST nvmf_shutdown_tc1 00:28:43.530 ************************************ 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:43.530 ************************************ 00:28:43.530 
START TEST nvmf_shutdown_tc2 00:28:43.530 ************************************ 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:43.530 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:43.530 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:43.530 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:43.531 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:43.531 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:43.531 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:43.531 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.531 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:43.531 Found net devices under 0000:af:00.0: cvl_0_0 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:43.531 Found net devices under 0000:af:00.1: cvl_0_1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:43.531 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:43.531 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:43.531 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:28:43.531 00:28:43.531 --- 10.0.0.2 ping statistics --- 00:28:43.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.531 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:43.531 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:43.531 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:28:43.531 00:28:43.531 --- 10.0.0.1 ping statistics --- 00:28:43.531 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:43.531 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:43.531 06:34:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.531 06:34:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1093916 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1093916 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093916 ']' 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:43.531 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:43.531 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:43.532 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:43.532 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.532 [2024-12-13 06:34:35.058300] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:43.532 [2024-12-13 06:34:35.058343] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:43.532 [2024-12-13 06:34:35.135858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:43.532 [2024-12-13 06:34:35.157658] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:43.532 [2024-12-13 06:34:35.157694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:43.532 [2024-12-13 06:34:35.157702] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:43.532 [2024-12-13 06:34:35.157708] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:43.532 [2024-12-13 06:34:35.157712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:43.532 [2024-12-13 06:34:35.159222] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:43.532 [2024-12-13 06:34:35.159305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:43.532 [2024-12-13 06:34:35.159418] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:43.532 [2024-12-13 06:34:35.159420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.791 [2024-12-13 06:34:35.299199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.791 06:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.791 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.791 Malloc1 00:28:43.791 [2024-12-13 06:34:35.411124] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.791 Malloc2 00:28:44.049 Malloc3 00:28:44.049 Malloc4 00:28:44.049 Malloc5 00:28:44.049 Malloc6 00:28:44.049 Malloc7 00:28:44.049 Malloc8 00:28:44.309 Malloc9 
00:28:44.309 Malloc10 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1093985 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1093985 /var/tmp/bdevperf.sock 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1093985 ']' 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:28:44.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.309 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.309 { 00:28:44.309 "params": { 00:28:44.309 "name": "Nvme$subsystem", 00:28:44.309 "trtype": "$TEST_TRANSPORT", 00:28:44.309 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.309 "adrfam": "ipv4", 00:28:44.309 "trsvcid": "$NVMF_PORT", 00:28:44.309 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.309 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 
"adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": 
${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 [2024-12-13 06:34:35.881056] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:44.310 [2024-12-13 06:34:35.881106] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1093985 ] 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": 
${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:44.310 { 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme$subsystem", 00:28:44.310 "trtype": "$TEST_TRANSPORT", 00:28:44.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "$NVMF_PORT", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:44.310 "hdgst": ${hdgst:-false}, 00:28:44.310 "ddgst": ${ddgst:-false} 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 } 00:28:44.310 EOF 00:28:44.310 )") 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:28:44.310 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme1", 00:28:44.310 "trtype": "tcp", 00:28:44.310 "traddr": "10.0.0.2", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "4420", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:44.310 "hdgst": false, 00:28:44.310 "ddgst": false 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 },{ 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme2", 00:28:44.310 "trtype": "tcp", 00:28:44.310 "traddr": "10.0.0.2", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "4420", 00:28:44.310 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:44.310 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:44.310 "hdgst": false, 00:28:44.310 "ddgst": false 00:28:44.310 }, 00:28:44.310 "method": "bdev_nvme_attach_controller" 00:28:44.310 },{ 00:28:44.310 "params": { 00:28:44.310 "name": "Nvme3", 00:28:44.310 "trtype": "tcp", 00:28:44.310 "traddr": "10.0.0.2", 00:28:44.310 "adrfam": "ipv4", 00:28:44.310 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme4", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 
00:28:44.311 "name": "Nvme5", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme6", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme7", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme8", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme9", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 },{ 00:28:44.311 "params": { 00:28:44.311 "name": "Nvme10", 00:28:44.311 "trtype": "tcp", 00:28:44.311 "traddr": "10.0.0.2", 00:28:44.311 "adrfam": "ipv4", 00:28:44.311 "trsvcid": "4420", 00:28:44.311 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:44.311 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:44.311 "hdgst": false, 00:28:44.311 "ddgst": false 00:28:44.311 }, 00:28:44.311 "method": "bdev_nvme_attach_controller" 00:28:44.311 }' 00:28:44.311 [2024-12-13 06:34:35.957721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:44.570 [2024-12-13 06:34:35.980016] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:45.949 Running I/O for 10 seconds... 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:46.208 06:34:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:46.208 06:34:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:46.467 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:28:46.467 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:46.467 06:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:46.467 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:46.467 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.467 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=136 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 136 -ge 100 ']' 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 1093985 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093985 ']' 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093985 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.727 06:34:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093985 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093985' 00:28:46.727 killing process with pid 1093985 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093985 00:28:46.727 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093985 00:28:46.727 Received shutdown signal, test time was about 0.841386 seconds 00:28:46.727 00:28:46.727 Latency(us) 00:28:46.727 [2024-12-13T05:34:38.381Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:46.727 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme1n1 : 0.83 313.94 19.62 0.00 0.00 200758.72 1888.06 207717.91 00:28:46.727 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme2n1 : 0.83 306.64 19.17 0.00 0.00 202413.35 14854.83 215707.06 00:28:46.727 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme3n1 : 0.83 310.24 19.39 0.00 0.00 196148.66 14480.34 215707.06 00:28:46.727 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme4n1 : 0.84 304.50 19.03 0.00 0.00 196147.20 
13793.77 218702.99 00:28:46.727 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme5n1 : 0.81 236.42 14.78 0.00 0.00 247021.31 15978.30 219701.64 00:28:46.727 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme6n1 : 0.81 237.19 14.82 0.00 0.00 240883.81 27213.04 201726.05 00:28:46.727 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme7n1 : 0.84 305.14 19.07 0.00 0.00 184112.52 14417.92 214708.42 00:28:46.727 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme8n1 : 0.83 315.60 19.72 0.00 0.00 172783.73 4805.97 201726.05 00:28:46.727 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme9n1 : 0.82 234.85 14.68 0.00 0.00 228047.40 18849.40 225693.50 00:28:46.727 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:46.727 Verification LBA range: start 0x0 length 0x400 00:28:46.727 Nvme10n1 : 0.82 234.09 14.63 0.00 0.00 223829.82 17725.93 231685.36 00:28:46.727 [2024-12-13T05:34:38.381Z] =================================================================================================================== 00:28:46.727 [2024-12-13T05:34:38.381Z] Total : 2798.61 174.91 0.00 0.00 206271.02 1888.06 231685.36 00:28:46.986 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1093916 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.924 rmmod nvme_tcp 00:28:47.924 rmmod nvme_fabrics 00:28:47.924 rmmod nvme_keyring 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 1093916 ']' 
00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 1093916 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1093916 ']' 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1093916 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:47.924 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1093916 00:28:48.183 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:48.183 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:48.183 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1093916' 00:28:48.183 killing process with pid 1093916 00:28:48.183 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1093916 00:28:48.183 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1093916 00:28:48.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:48.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.442 06:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:48.442 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:50.977 00:28:50.977 real 0m7.316s 00:28:50.977 user 0m21.541s 00:28:50.977 sys 0m1.344s 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:50.977 ************************************ 00:28:50.977 END TEST nvmf_shutdown_tc2 00:28:50.977 ************************************ 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:50.977 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:50.977 ************************************ 00:28:50.977 START TEST nvmf_shutdown_tc3 00:28:50.977 ************************************ 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:50.977 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.977 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.977 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.977 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.977 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.978 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.978 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.978 
06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.978 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:28:50.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:28:50.978 00:28:50.978 --- 10.0.0.2 ping statistics --- 00:28:50.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.978 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:28:50.978 00:28:50.978 --- 10.0.0.1 ping statistics --- 00:28:50.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.978 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=1095227 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 1095227 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095227 ']' 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:50.978 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.978 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:50.978 [2024-12-13 06:34:42.478626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:50.978 [2024-12-13 06:34:42.478669] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.978 [2024-12-13 06:34:42.558325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.978 [2024-12-13 06:34:42.580546] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.978 [2024-12-13 06:34:42.580582] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.978 [2024-12-13 06:34:42.580593] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.978 [2024-12-13 06:34:42.580599] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.978 [2024-12-13 06:34:42.580604] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.978 [2024-12-13 06:34:42.582080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.978 [2024-12-13 06:34:42.582186] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.978 [2024-12-13 06:34:42.582311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.978 [2024-12-13 06:34:42.582312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.238 [2024-12-13 06:34:42.713167] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.238 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.238 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.238 Malloc1 00:28:51.238 [2024-12-13 06:34:42.818357] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.238 Malloc2 00:28:51.238 Malloc3 00:28:51.498 Malloc4 00:28:51.498 Malloc5 00:28:51.498 Malloc6 00:28:51.498 Malloc7 00:28:51.498 Malloc8 00:28:51.498 Malloc9 
00:28:51.757 Malloc10 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1095332 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1095332 /var/tmp/bdevperf.sock 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1095332 ']' 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:28:51.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": 
${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 [2024-12-13 06:34:43.281841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:28:51.758 [2024-12-13 06:34:43.281889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1095332 ] 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": 
"bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.758 "ddgst": ${ddgst:-false} 00:28:51.758 }, 00:28:51.758 "method": "bdev_nvme_attach_controller" 00:28:51.758 } 00:28:51.758 EOF 00:28:51.758 )") 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.758 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.758 { 00:28:51.758 "params": { 00:28:51.758 "name": "Nvme$subsystem", 00:28:51.758 "trtype": "$TEST_TRANSPORT", 00:28:51.758 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.758 "adrfam": "ipv4", 00:28:51.758 "trsvcid": "$NVMF_PORT", 00:28:51.758 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.758 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.758 "hdgst": ${hdgst:-false}, 00:28:51.759 "ddgst": ${ddgst:-false} 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 } 00:28:51.759 EOF 00:28:51.759 )") 00:28:51.759 06:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.759 { 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme$subsystem", 00:28:51.759 "trtype": "$TEST_TRANSPORT", 00:28:51.759 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "$NVMF_PORT", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.759 "hdgst": ${hdgst:-false}, 00:28:51.759 "ddgst": ${ddgst:-false} 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 } 00:28:51.759 EOF 00:28:51.759 )") 00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:28:51.759 06:34:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme1", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme2", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme3", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme4", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 
00:28:51.759 "name": "Nvme5", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme6", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme7", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme8", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme9", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 },{ 00:28:51.759 "params": { 00:28:51.759 "name": "Nvme10", 00:28:51.759 "trtype": "tcp", 00:28:51.759 "traddr": "10.0.0.2", 00:28:51.759 "adrfam": "ipv4", 00:28:51.759 "trsvcid": "4420", 00:28:51.759 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:51.759 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:51.759 "hdgst": false, 00:28:51.759 "ddgst": false 00:28:51.759 }, 00:28:51.759 "method": "bdev_nvme_attach_controller" 00:28:51.759 }' 00:28:51.759 [2024-12-13 06:34:43.360190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.759 [2024-12-13 06:34:43.382465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.673 Running I/O for 10 seconds... 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:28:53.673 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1095227 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095227 ']' 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095227 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:28:53.933 06:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:53.933 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1095227 00:28:54.208 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:54.208 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:54.208 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1095227' 00:28:54.208 killing process with pid 1095227 00:28:54.208 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1095227 00:28:54.208 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1095227 00:28:54.208 [2024-12-13 06:34:45.607028] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607089] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be 
set
*ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607375] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607394] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607433] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607440] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607446] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 
is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607458] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607464] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607476] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607483] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607489] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607495] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.607501] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6fef00 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608659] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608667] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608673] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 
00:28:54.208 [2024-12-13 06:34:45.608680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608693] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608700] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608706] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608713] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.208 [2024-12-13 06:34:45.608734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608757] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608764] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608775] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608788] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608795] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608801] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608807] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608820] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608826] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608838] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608850] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608857] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608863] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608869] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608877] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608883] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608895] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608902] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608908] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608920] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 
is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608937] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608973] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608979] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608985] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608992] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.608999] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609006] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609018] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 
00:28:54.209 [2024-12-13 06:34:45.609029] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609035] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609041] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609047] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609053] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609059] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609065] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609071] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609076] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.609082] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701980 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610200] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610217] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610223] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610243] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610249] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610265] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610272] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610278] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610310] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610324] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610330] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610336] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610342] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610349] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610356] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 
is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610398] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610418] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610437] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.209 [2024-12-13 06:34:45.610444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610456] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610462] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 
00:28:54.210 [2024-12-13 06:34:45.610470] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610477] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610497] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610518] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610525] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610552] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610571] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610578] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610590] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610596] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610602] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.610622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff3f0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611782] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611806] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611814] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611827] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611833] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611860] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611872] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611879] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 
is same with the state(6) to be set 00:28:54.210 [2024-12-13 06:34:45.611885] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ff8c0 is same with the state(6) to be set 00:28:54.211 [2024-12-13 06:34:45.613168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6ffdb0 is same with the state(6) to be set 00:28:54.211 [2024-12-13 06:34:45.614718] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700280 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615155] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7140 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615270] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a28030 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6cd0 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615394] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 
nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615458] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fab80 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615508] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d4bc0 is same with the state(6) to be set 00:28:54.212 [2024-12-13 06:34:45.615561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.212 [2024-12-13 06:34:45.615595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.212 [2024-12-13 06:34:45.615602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.213 [2024-12-13 06:34:45.615615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d5440 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.615844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.615844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.615979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.615992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616063] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-12-13 06:34:45.616070] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 he state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616095] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 
is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:1[2024-12-13 06:34:45.616118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 he state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616127] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616134] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with t[2024-12-13 06:34:45.616135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:1he state(6) to be set 00:28:54.213 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616144] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with t[2024-12-13 06:34:45.616145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:28:54.213 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616155] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616165] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616179] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616188] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616196] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213 [2024-12-13 06:34:45.616204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.213 [2024-12-13 06:34:45.616212] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.213 [2024-12-13 06:34:45.616217] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.213
[2024-12-13 06:34:45.616219] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616229] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214
[2024-12-13 06:34:45.616238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214
[2024-12-13 06:34:45.616247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214
[2024-12-13 06:34:45.616254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214
[2024-12-13 06:34:45.616261] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.214 [2024-12-13 06:34:45.616268] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616275] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616297] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with 
the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214
[2024-12-13 06:34:45.616320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214
[2024-12-13 06:34:45.616327] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214
[2024-12-13 06:34:45.616334] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214
[2024-12-13 06:34:45.616346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214
[2024-12-13 06:34:45.616363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214
[2024-12-13 06:34:45.616366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED -
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700770 is same with the state(6) to be set 00:28:54.214 [2024-12-13 06:34:45.616382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:54.214 [2024-12-13 06:34:45.616442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616529] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.214 [2024-12-13 06:34:45.616705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.214 [2024-12-13 06:34:45.616711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 
[2024-12-13 06:34:45.616781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.616911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-12-13 06:34:45.616917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.215 [2024-12-13 06:34:45.617485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with the state(6) to be set 00:28:54.215 [2024-12-13 06:34:45.617500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with the state(6) to be set 00:28:54.215 [2024-12-13 06:34:45.617511] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with 
the state(6) to be set 00:28:54.215 [2024-12-13 06:34:45.617519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with the state(6) to be set
[... identical tcp.c:1790 recv-state message for tqpair=0x700af0 repeated through 06:34:45.617888; duplicates trimmed ...]
00:28:54.216 [2024-12-13 06:34:45.618565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:54.216 [2024-12-13 06:34:45.618603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fab80 (9): Bad file descriptor 00:28:54.216 [2024-12-13 06:34:45.620071] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:54.216 [2024-12-13 06:34:45.620199] posix.c:1054:posix_sock_create:
*ERROR*: connect() failed, errno = 111 00:28:54.216 [2024-12-13 06:34:45.620216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fab80 with addr=10.0.0.2, port=4420 00:28:54.216 [2024-12-13 06:34:45.620224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fab80 is same with the state(6) to be set
[... nvme_tcp.c:1184 "Unexpected PDU type 0x00" repeated 4x between 06:34:45.620277 and 06:34:45.620394; duplicates trimmed ...]
00:28:54.216 [2024-12-13 06:34:45.620526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.216 [2024-12-13 06:34:45.620541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... analogous READ/WRITE command/completion pairs (cid 0-63, lba 24448-33280), each completing ABORTED - SQ DELETION (00/08), trimmed between 06:34:45.620555 and 06:34:45.621420; interleaved non-duplicate entries retained below ...]
00:28:54.216 [2024-12-13 06:34:45.620631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19dde30 is same with the state(6) to be set 00:28:54.216 [2024-12-13 06:34:45.620839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fab80 (9): Bad file descriptor
00:28:54.217 [2024-12-13 06:34:45.628663] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with the state(6) to be set 00:28:54.217 [2024-12-13 06:34:45.628672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700af0 is same with the state(6) to be set
00:28:54.217 [2024-12-13 06:34:45.629516] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x700fc0 is same with the state(6) to be set
[... identical tcp.c:1790 recv-state message for tqpair=0x700fc0 repeated through 06:34:45.629894; duplicates trimmed ...]
00:28:54.218 [2024-12-13 06:34:45.630422] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701490 is same with the state(6) to be set [identical tcp.c:1790 message for tqpair=0x701490 repeated at every timestamp from 06:34:45.630437 through 06:34:45.630816] 00:28:54.218 [2024-12-13 06:34:45.630822]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701490 is same with the state(6) to be set 00:28:54.218 [2024-12-13 06:34:45.630828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x701490 is same with the state(6) to be set 00:28:54.218 [2024-12-13 06:34:45.632896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.218 [2024-12-13 06:34:45.632914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 [ABORTED - SQ DELETION (00/08) completion for each READ, and further READ commands cid:38 through cid:63 (lba 29440 through 32640 in steps of 128, len:128), timestamps 06:34:45.632923 through 06:34:45.633433] 00:28:54.219 [2024-12-13 06:34:45.633442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.633457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19db8b0 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.634629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:54.219 [2024-12-13 06:34:45.634686] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01060 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.634701] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:54.219 [2024-12-13 06:34:45.634710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:54.219 [2024-12-13 06:34:45.634720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:54.219 [2024-12-13 06:34:45.634730] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:54.219 [2024-12-13 06:34:45.634755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7140 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.634787] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634809] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a31600 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.634889] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a28030 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.634910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x15d6cd0 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.634940] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634960] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.634987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.634996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.635005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.635013] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a01660 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.635044] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.635055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.635065] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.635073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.635082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.635091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.635100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:54.219 [2024-12-13 06:34:45.635109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.219 [2024-12-13 06:34:45.635117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e2610 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.635139] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d4bc0 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.635154] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d5440 (9): Bad file descriptor 00:28:54.219 [2024-12-13 06:34:45.636607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:54.219 [2024-12-13 06:34:45.637183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:54.219 [2024-12-13 06:34:45.637378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.219 [2024-12-13 06:34:45.637397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x1a01060 with addr=10.0.0.2, port=4420 00:28:54.219 [2024-12-13 06:34:45.637407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a01060 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.637523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.219 [2024-12-13 06:34:45.637538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d4bc0 with addr=10.0.0.2, port=4420 00:28:54.219 [2024-12-13 06:34:45.637547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d4bc0 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.638245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.219 [2024-12-13 06:34:45.638266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fab80 with addr=10.0.0.2, port=4420 00:28:54.219 [2024-12-13 06:34:45.638275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fab80 is same with the state(6) to be set 00:28:54.219 [2024-12-13 06:34:45.638288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01060 (9): Bad file descriptor 00:28:54.220 [2024-12-13 06:34:45.638300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d4bc0 (9): Bad file descriptor 00:28:54.220 [2024-12-13 06:34:45.638377] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:54.220 [2024-12-13 06:34:45.638428] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:54.220 [2024-12-13 06:34:45.638483] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:54.220 [2024-12-13 06:34:45.638505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fab80 (9): Bad file descriptor 00:28:54.220 [2024-12-13 
06:34:45.638516] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:54.220 [2024-12-13 06:34:45.638525] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:54.220 [2024-12-13 06:34:45.638534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:28:54.220 [2024-12-13 06:34:45.638544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:54.220 [2024-12-13 06:34:45.638553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:54.220 [2024-12-13 06:34:45.638560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:54.220 [2024-12-13 06:34:45.638569] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:54.220 [2024-12-13 06:34:45.638577] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:54.220 [2024-12-13 06:34:45.638645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:54.220 [2024-12-13 06:34:45.638655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:54.220 [2024-12-13 06:34:45.638664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:54.220 [2024-12-13 06:34:45.638672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:28:54.220 [2024-12-13 06:34:45.644689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a31600 (9): Bad file descriptor 00:28:54.220 [2024-12-13 06:34:45.644737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01660 (9): Bad file descriptor 00:28:54.220 [2024-12-13 06:34:45.644760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e2610 (9): Bad file descriptor 00:28:54.220 [2024-12-13 06:34:45.644905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.644922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [READ commands cid:1 through cid:15 (lba 24704 through 26496 in steps of 128, len:128) each followed by an ABORTED - SQ DELETION (00/08) completion, timestamps 06:34:45.644943 through 06:34:45.645262] 00:28:54.220 [2024-12-13 06:34:45.645271]
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.220 [2024-12-13 06:34:45.645613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.220 [2024-12-13 06:34:45.645625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 
06:34:45.645657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.645984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.645995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 
[2024-12-13 06:34:45.646040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.646361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.646372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1afc940 is same with the state(6) to be set 00:28:54.221 [2024-12-13 06:34:45.647846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.647866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.647882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.647893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:54.221 [2024-12-13 06:34:45.647905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.221 [2024-12-13 06:34:45.647916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.221 [2024-12-13 06:34:45.647929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.647939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.647951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.647961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.647973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.647984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.647996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648029] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 
nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:54.222 [2024-12-13 06:34:45.648292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648416] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.222 [2024-12-13 06:34:45.648540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.222 [2024-12-13 06:34:45.648552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.222 [2024-12-13 06:34:45.648856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.222 [2024-12-13 06:34:45.648869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.648891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.648913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.648936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.648959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.648981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.648994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.649314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.649325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dad40 is same with the state(6) to be set
00:28:54.223 [2024-12-13 06:34:45.650415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.223 [2024-12-13 06:34:45.650713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.223 [2024-12-13 06:34:45.650721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.650985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.650992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.224 [2024-12-13 06:34:45.651329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.224 [2024-12-13 06:34:45.651338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.651345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.651355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.651362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.651370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.651376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.651385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.651391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.651400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.651406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.651413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17dbfb0 is same with the state(6) to be set
00:28:54.225 [2024-12-13 06:34:45.652405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.225 [2024-12-13 06:34:45.652719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:54.225 [2024-12-13 06:34:45.652727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652807] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.225 [2024-12-13 06:34:45.652884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.225 [2024-12-13 06:34:45.652891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.652990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.652998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 
06:34:45.653048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653128] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 
[2024-12-13 06:34:45.653297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.226 [2024-12-13 06:34:45.653356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.226 [2024-12-13 06:34:45.653363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af5eb0 is same with the state(6) to be set 00:28:54.226 [2024-12-13 06:34:45.654330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:28:54.226 [2024-12-13 06:34:45.654347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:28:54.226 [2024-12-13 06:34:45.654357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:28:54.226 [2024-12-13 06:34:45.654367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:28:54.226 [2024-12-13 06:34:45.654746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.226 [2024-12-13 06:34:45.654763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d7140 with addr=10.0.0.2, port=4420 00:28:54.226 [2024-12-13 06:34:45.654772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d7140 is same with the state(6) to be set 00:28:54.226 [2024-12-13 06:34:45.654985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.226 [2024-12-13 06:34:45.654995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d6cd0 with addr=10.0.0.2, port=4420 00:28:54.226 [2024-12-13 06:34:45.655002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d6cd0 is same with the state(6) to be set 00:28:54.226 [2024-12-13 06:34:45.655219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.226 [2024-12-13 06:34:45.655229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d5440 with addr=10.0.0.2, port=4420 00:28:54.226 [2024-12-13 06:34:45.655237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d5440 is same with the state(6) to be set 00:28:54.226 [2024-12-13 06:34:45.655458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.226 [2024-12-13 06:34:45.655469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a28030 with addr=10.0.0.2, port=4420 00:28:54.226 
[2024-12-13 06:34:45.655480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a28030 is same with the state(6) to be set 00:28:54.226 [2024-12-13 06:34:45.656350] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:28:54.226 [2024-12-13 06:34:45.656366] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:28:54.227 [2024-12-13 06:34:45.656376] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:28:54.227 [2024-12-13 06:34:45.656401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7140 (9): Bad file descriptor 00:28:54.227 [2024-12-13 06:34:45.656411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6cd0 (9): Bad file descriptor 00:28:54.227 [2024-12-13 06:34:45.656419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d5440 (9): Bad file descriptor 00:28:54.227 [2024-12-13 06:34:45.656428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a28030 (9): Bad file descriptor 00:28:54.227 [2024-12-13 06:34:45.656752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.227 [2024-12-13 06:34:45.656768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d4bc0 with addr=10.0.0.2, port=4420 00:28:54.227 [2024-12-13 06:34:45.656775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d4bc0 is same with the state(6) to be set 00:28:54.227 [2024-12-13 06:34:45.656940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.227 [2024-12-13 06:34:45.656951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a01060 with addr=10.0.0.2, port=4420 00:28:54.227 [2024-12-13 
06:34:45.656958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a01060 is same with the state(6) to be set 00:28:54.227 [2024-12-13 06:34:45.657100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.227 [2024-12-13 06:34:45.657110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19fab80 with addr=10.0.0.2, port=4420 00:28:54.227 [2024-12-13 06:34:45.657117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fab80 is same with the state(6) to be set 00:28:54.227 [2024-12-13 06:34:45.657124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:54.227 [2024-12-13 06:34:45.657129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:54.227 [2024-12-13 06:34:45.657137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:54.227 [2024-12-13 06:34:45.657145] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:54.227 [2024-12-13 06:34:45.657152] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:54.227 [2024-12-13 06:34:45.657158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:54.227 [2024-12-13 06:34:45.657163] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:28:54.227 [2024-12-13 06:34:45.657169] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:28:54.227 [2024-12-13 06:34:45.657176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:54.227 [2024-12-13 06:34:45.657182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:54.227 [2024-12-13 06:34:45.657188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:54.227 [2024-12-13 06:34:45.657194] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:54.227 [2024-12-13 06:34:45.657204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:54.227 [2024-12-13 06:34:45.657210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:54.227 [2024-12-13 06:34:45.657216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:54.227 [2024-12-13 06:34:45.657222] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:28:54.227 [2024-12-13 06:34:45.657266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657352] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 
06:34:45.657613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.227 [2024-12-13 06:34:45.657673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.227 [2024-12-13 06:34:45.657680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657695] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 
[2024-12-13 06:34:45.657861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657938] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.657989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.657997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.658089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.658096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26d8930 is same with the state(6) to be set 
00:28:54.228 [2024-12-13 06:34:45.659048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659140] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.228 [2024-12-13 06:34:45.659212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.228 [2024-12-13 06:34:45.659220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 
nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:54.229 [2024-12-13 06:34:45.659308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659385] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 
06:34:45.659638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.229 [2024-12-13 06:34:45.659713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:54.229 [2024-12-13 06:34:45.659721] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.229 [2024-12-13 06:34:45.659727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:51-63, lba:22912-24448 ...]
00:28:54.230 [2024-12-13 06:34:45.659923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x29261c0 is same with the state(6) to be set
00:28:54.230 [2024-12-13 06:34:45.660883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:54.230 [2024-12-13 06:34:45.660896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ command / ABORTED - SQ DELETION completion pairs repeated for cid:1-63, lba:16512-24448 ...]
00:28:54.231 [2024-12-13 06:34:45.661824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af4b80 is same with the state(6) to be set
00:28:54.231 [2024-12-13 06:34:45.662785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7,
1] resetting controller
00:28:54.231 [2024-12-13 06:34:45.662802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:28:54.231 task offset: 24576 on job bdev=Nvme5n1 fails
00:28:54.231
00:28:54.231 Latency(us)
00:28:54.231 [2024-12-13T05:34:45.885Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:54.231 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.231 Job: Nvme1n1 ended in about 0.79 seconds with error
00:28:54.231 Verification LBA range: start 0x0 length 0x400
00:28:54.231 Nvme1n1 : 0.79 243.52 15.22 81.17 0.00 194892.31 15541.39 212711.13
00:28:54.231 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.231 Job: Nvme2n1 ended in about 0.79 seconds with error
00:28:54.231 Verification LBA range: start 0x0 length 0x400
00:28:54.231 Nvme2n1 : 0.79 161.75 10.11 80.87 0.00 255728.23 18849.40 223696.21
00:28:54.231 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.231 Job: Nvme3n1 ended in about 0.79 seconds with error
00:28:54.231 Verification LBA range: start 0x0 length 0x400
00:28:54.231 Nvme3n1 : 0.79 248.32 15.52 80.67 0.00 184756.16 15915.89 212711.13
00:28:54.231 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.231 Job: Nvme4n1 ended in about 0.78 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme4n1 : 0.78 253.46 15.84 82.34 0.00 177019.90 8925.38 211712.49
00:28:54.232 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme5n1 ended in about 0.76 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme5n1 : 0.76 252.81 15.80 84.27 0.00 172236.46 2855.50 217704.35
00:28:54.232 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme6n1 ended in about 0.78 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme6n1 : 0.78 245.07 15.32 7.74 0.00 220977.65 15978.30 209715.20
00:28:54.232 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme7n1 ended in about 0.80 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme7n1 : 0.80 170.01 10.63 70.00 0.00 231933.40 25964.74 207717.91
00:28:54.232 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme8n1 ended in about 0.80 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme8n1 : 0.80 165.88 10.37 73.59 0.00 227362.30 13294.45 208716.56
00:28:54.232 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme9n1 ended in about 0.80 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme9n1 : 0.80 159.26 9.95 79.63 0.00 224021.29 34453.21 217704.35
00:28:54.232 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:54.232 Job: Nvme10n1 ended in about 0.80 seconds with error
00:28:54.232 Verification LBA range: start 0x0 length 0x400
00:28:54.232 Nvme10n1 : 0.80 160.95 10.06 80.48 0.00 216115.36 19223.89 234681.30
00:28:54.232 [2024-12-13T05:34:45.886Z] ===================================================================================================================
00:28:54.232 [2024-12-13T05:34:45.886Z] Total : 2061.05 128.82 720.77 0.00 207082.63 2855.50 234681.30
00:28:54.232 [2024-12-13 06:34:45.694000] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:28:54.232 [2024-12-13 06:34:45.694048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
00:28:54.232 [2024-12-13 06:34:45.694107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d4bc0 (9): Bad file descriptor
00:28:54.232
[2024-12-13 06:34:45.694121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01060 (9): Bad file descriptor
00:28:54.232 [2024-12-13 06:34:45.694130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fab80 (9): Bad file descriptor
00:28:54.232 [2024-12-13 06:34:45.694508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.232 [2024-12-13 06:34:45.694527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14e2610 with addr=10.0.0.2, port=4420
00:28:54.232 [2024-12-13 06:34:45.694537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14e2610 is same with the state(6) to be set
[... identical connect() failed / sock connection error / recv state sequence repeated for tqpair=0x1a01660 and tqpair=0x1a31600 ...]
00:28:54.232 [2024-12-13 06:34:45.694971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:28:54.232 [2024-12-13 06:34:45.694977] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:28:54.232 [2024-12-13 06:34:45.694985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:28:54.232 [2024-12-13 06:34:45.694999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
[... identical error-state / reinitialization-failed / failed-state / reset-failed sequence repeated for cnode6 and cnode5 ...]
00:28:54.232 [2024-12-13 06:34:45.695797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14e2610 (9): Bad file descriptor
00:28:54.232 [2024-12-13 06:34:45.695813] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01660 (9): Bad file descriptor
00:28:54.232 [2024-12-13 06:34:45.695822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a31600 (9): Bad file descriptor
00:28:54.232 [2024-12-13 06:34:45.695866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
[... identical resetting-controller notice repeated for cnode3, cnode2, cnode1, cnode5, cnode6, and cnode4 ...]
00:28:54.232 [2024-12-13 06:34:45.695957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:28:54.232 [2024-12-13 06:34:45.695963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:28:54.232 [2024-12-13 06:34:45.695970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:28:54.232 [2024-12-13 06:34:45.695977] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
[... identical error-state / reinitialization-failed / failed-state / reset-failed sequence repeated for cnode8 and cnode9 ...]
00:28:54.232 [2024-12-13 06:34:45.696277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.232 [2024-12-13 06:34:45.696289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a28030 with addr=10.0.0.2, port=4420
00:28:54.232 [2024-12-13 06:34:45.696297] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a28030 is same with the state(6) to be set
[... identical connect() failed / sock connection error / recv state sequence repeated for tqpair=0x15d5440, 0x15d6cd0, and 0x15d7140 ...]
00:28:54.232 [2024-12-13 06:34:45.697112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:54.232 [2024-12-13 06:34:45.697122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock:
*ERROR*: sock connection error of tqpair=0x19fab80 with addr=10.0.0.2, port=4420 00:28:54.232 [2024-12-13 06:34:45.697128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19fab80 is same with the state(6) to be set 00:28:54.232 [2024-12-13 06:34:45.697296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.232 [2024-12-13 06:34:45.697306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a01060 with addr=10.0.0.2, port=4420 00:28:54.232 [2024-12-13 06:34:45.697313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a01060 is same with the state(6) to be set 00:28:54.232 [2024-12-13 06:34:45.697393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:54.232 [2024-12-13 06:34:45.697403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d4bc0 with addr=10.0.0.2, port=4420 00:28:54.232 [2024-12-13 06:34:45.697410] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d4bc0 is same with the state(6) to be set 00:28:54.232 [2024-12-13 06:34:45.697438] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a28030 (9): Bad file descriptor 00:28:54.232 [2024-12-13 06:34:45.697451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d5440 (9): Bad file descriptor 00:28:54.232 [2024-12-13 06:34:45.697459] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d6cd0 (9): Bad file descriptor 00:28:54.232 [2024-12-13 06:34:45.697467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d7140 (9): Bad file descriptor 00:28:54.232 [2024-12-13 06:34:45.697475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fab80 (9): Bad file descriptor 00:28:54.233 [2024-12-13 06:34:45.697483] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a01060 (9): Bad file descriptor 00:28:54.233 [2024-12-13 06:34:45.697491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d4bc0 (9): Bad file descriptor 00:28:54.233 [2024-12-13 06:34:45.697515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:28:54.233 [2024-12-13 06:34:45.697535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697541] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:28:54.233 [2024-12-13 06:34:45.697559] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:28:54.233 [2024-12-13 06:34:45.697582] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:28:54.233 [2024-12-13 06:34:45.697605] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:28:54.233 [2024-12-13 06:34:45.697627] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:28:54.233 [2024-12-13 06:34:45.697651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:28:54.233 [2024-12-13 06:34:45.697657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:28:54.233 [2024-12-13 06:34:45.697663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:28:54.233 [2024-12-13 06:34:45.697669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:28:54.233 [2024-12-13 06:34:45.697674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:28:54.492 06:34:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1095332 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1095332 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1095332 
00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:55.429 06:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:55.429 rmmod nvme_tcp 00:28:55.429 rmmod nvme_fabrics 00:28:55.429 rmmod nvme_keyring 00:28:55.429 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 1095227 ']' 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 1095227 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1095227 ']' 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1095227 00:28:55.689 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1095227) - No such process 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1095227 is not found' 00:28:55.689 Process with pid 1095227 is not found 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:55.689 06:34:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:55.689 06:34:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:57.595 00:28:57.595 real 0m7.084s 00:28:57.595 user 0m16.184s 00:28:57.595 sys 0m1.230s 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:57.595 ************************************ 00:28:57.595 END TEST nvmf_shutdown_tc3 00:28:57.595 ************************************ 00:28:57.595 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:57.595 ************************************ 00:28:57.595 START TEST nvmf_shutdown_tc4 00:28:57.595 ************************************ 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.595 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:57.855 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:57.855 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:57.855 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.855 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:57.855 Found net devices under 0000:af:00.0: cvl_0_0 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:57.855 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:57.855 Found net devices under 0000:af:00.1: cvl_0_1 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:57.855 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:57.856 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:58.115 
06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:58.115 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:58.115 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.386 ms 00:28:58.115 00:28:58.115 --- 10.0.0.2 ping statistics --- 00:28:58.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.115 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:58.115 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:58.115 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.213 ms 00:28:58.115 00:28:58.115 --- 10.0.0.1 ping statistics --- 00:28:58.115 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:58.115 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:58.115 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=1096515 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 1096515 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1096515 ']' 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:58.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:58.115 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.115 [2024-12-13 06:34:49.708556] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:28:58.115 [2024-12-13 06:34:49.708606] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:58.375 [2024-12-13 06:34:49.789111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:58.375 [2024-12-13 06:34:49.811322] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:58.375 [2024-12-13 06:34:49.811363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:58.375 [2024-12-13 06:34:49.811371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:58.375 [2024-12-13 06:34:49.811376] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:58.375 [2024-12-13 06:34:49.811381] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:58.375 [2024-12-13 06:34:49.812845] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:58.375 [2024-12-13 06:34:49.812931] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:58.375 [2024-12-13 06:34:49.813017] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:58.375 [2024-12-13 06:34:49.813018] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.375 [2024-12-13 06:34:49.952818] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.375 06:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.375 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.634 Malloc1 00:28:58.634 [2024-12-13 06:34:50.068611] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:58.634 Malloc2 00:28:58.634 Malloc3 00:28:58.634 Malloc4 00:28:58.634 Malloc5 00:28:58.634 Malloc6 00:28:58.893 Malloc7 00:28:58.893 Malloc8 00:28:58.893 Malloc9 
00:28:58.893 Malloc10 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1096686 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:28:58.893 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:28:59.152 [2024-12-13 06:34:50.568789] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1096515 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096515 ']' 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096515 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1096515 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1096515' 00:29:04.429 killing process with pid 1096515 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1096515 00:29:04.429 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1096515 00:29:04.429 [2024-12-13 06:34:55.562888] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 
06:34:55.562950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.562958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.562965] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.562971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.562977] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fd960 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563787] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563813] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563821] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563846] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 [2024-12-13 06:34:55.563852] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18fde30 is same with the state(6) to be set 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, 
sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 starting I/O failed: -6 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.429 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 [2024-12-13 06:34:55.575631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with 
error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 
00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 [2024-12-13 06:34:55.576558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 
Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, 
sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 [2024-12-13 06:34:55.577538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 
00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 Write completed with error (sct=0, sc=8) 00:29:04.430 starting I/O failed: -6 00:29:04.430 [2024-12-13 06:34:55.578180] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with Write completed with error (sct=0, sc=8) 00:29:04.430 the state(6) to be set 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 [2024-12-13 06:34:55.578218] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578226] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578232] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 [2024-12-13 06:34:55.578238] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f7c10 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: 
-6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578583] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 [2024-12-13 06:34:55.578597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578603] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8100 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 
00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 [2024-12-13 06:34:55.578914] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578945] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578952] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578958] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.431 [2024-12-13 06:34:55.578964] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 NVMe io qpair process completion error 00:29:04.431 [2024-12-13 06:34:55.578971] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578978] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578984] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 [2024-12-13 06:34:55.578990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x18f85f0 is same with the state(6) to be set 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 
00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 [2024-12-13 06:34:55.579938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting 
I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 starting I/O failed: -6 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.431 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error 
(sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 [2024-12-13 06:34:55.580866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 
00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 
00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 [2024-12-13 06:34:55.581824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 
Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 [2024-12-13 06:34:55.582559] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8e60 is same with the state(6) to be set 00:29:04.432 [2024-12-13 06:34:55.582576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8e60 is same with the state(6) to be set 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 [2024-12-13 06:34:55.582584] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8e60 is same with the state(6) to be set 00:29:04.432 [2024-12-13 06:34:55.582592] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8e60 is same with the state(6) to be set 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 [2024-12-13 06:34:55.582598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f8e60 is same with the state(6) to be set 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6
00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: 
-6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.432 starting I/O failed: -6 00:29:04.432 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 [2024-12-13 06:34:55.583509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.433 NVMe io qpair process completion error 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with 
error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 [2024-12-13 06:34:55.584413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error 
(sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 
00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 [2024-12-13 06:34:55.585298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 
00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with error (sct=0, sc=8) 00:29:04.433 starting I/O failed: -6 00:29:04.433 Write completed with 
error (sct=0, sc=8)
00:29:04.433 starting I/O failed: -6
00:29:04.433 [... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries omitted ...]
00:29:04.433 [2024-12-13 06:34:55.586312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:04.434 [... repeated write-error entries omitted ...]
00:29:04.434 [2024-12-13 06:34:55.588146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.434 NVMe io qpair process completion error
00:29:04.434 [... repeated write-error entries omitted ...]
00:29:04.434 [2024-12-13 06:34:55.589096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:04.435 [... repeated write-error entries omitted ...]
00:29:04.435 [2024-12-13 06:34:55.590048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:04.435 [... repeated write-error entries omitted ...]
00:29:04.435 [2024-12-13 06:34:55.591063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:04.436 [... repeated write-error entries omitted ...]
00:29:04.436 [2024-12-13 06:34:55.593161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.436 NVMe io qpair process completion error
00:29:04.436 [... repeated write-error entries omitted ...]
00:29:04.436 [2024-12-13 06:34:55.594182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.436 [... repeated write-error entries omitted ...]
00:29:04.436 [2024-12-13 06:34:55.594986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:04.436 [... repeated write-error entries omitted ...]
00:29:04.436 [2024-12-13 06:34:55.596011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:04.437 [... repeated write-error entries omitted ...]
00:29:04.437 [2024-12-13 06:34:55.602071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:04.437 NVMe io qpair process completion error
00:29:04.437 [... repeated write-error entries omitted ...]
00:29:04.437 Write completed with error (sct=0, sc=8)
00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 [2024-12-13 06:34:55.603346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting 
I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 Write completed with error (sct=0, sc=8) 00:29:04.437 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error 
(sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 [2024-12-13 06:34:55.604237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 
Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 [2024-12-13 06:34:55.605486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O 
failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting 
I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 
starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.438 starting I/O failed: -6 00:29:04.438 [2024-12-13 06:34:55.607224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.438 NVMe io qpair process completion error 00:29:04.438 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed 
with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 [2024-12-13 06:34:55.608421] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 
00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 [2024-12-13 06:34:55.609281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with 
error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 
starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 [2024-12-13 06:34:55.610283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, 
sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.439 starting I/O failed: -6 00:29:04.439 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error 
(sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with 
error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 [2024-12-13 06:34:55.611839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.440 NVMe io qpair process completion error
00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 [2024-12-13 06:34:55.612805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.440 Write completed with error (sct=0, sc=8) 00:29:04.440 starting I/O failed: -6 00:29:04.440 [2024-12-13 06:34:55.613593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:04.441 Write completed with error (sct=0, sc=8) 00:29:04.441 starting I/O failed: -6 00:29:04.441 [2024-12-13 06:34:55.614626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:04.441 Write completed with error (sct=0, sc=8) 00:29:04.441 starting I/O failed: -6 00:29:04.441 [2024-12-13 06:34:55.619171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.441 NVMe io qpair process completion error
00:29:04.442 Write completed with error (sct=0, sc=8) 00:29:04.442 starting I/O failed: -6 00:29:04.442 [2024-12-13 06:34:55.620154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.442 Write completed with error (sct=0, sc=8) 00:29:04.442 starting I/O failed: -6 00:29:04.442 [2024-12-13 06:34:55.620953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:04.442 Write completed with error (sct=0, sc=8) 00:29:04.442 starting I/O failed: -6 00:29:04.442 [2024-12-13 06:34:55.622007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:04.443 Write completed with error (sct=0, sc=8) 00:29:04.443 starting I/O failed: -6 00:29:04.443 [2024-12-13 06:34:55.626215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.443 NVMe io qpair process completion error
00:29:04.443 Write completed with error (sct=0, sc=8) 00:29:04.443 starting I/O failed: -6 00:29:04.443 [2024-12-13 06:34:55.627218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:04.443 Write completed with error (sct=0, sc=8) 00:29:04.443 starting I/O failed: -6 00:29:04.443 [2024-12-13 06:34:55.628119] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:04.443 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O
failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write 
completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 [2024-12-13 06:34:55.629131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 
Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 
00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: 
-6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 Write completed with error (sct=0, sc=8) 00:29:04.444 starting I/O failed: -6 00:29:04.444 [2024-12-13 06:34:55.631626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:04.444 NVMe io qpair process completion error 00:29:04.444 Initializing NVMe Controllers 00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:29:04.444 Controller IO queue size 128, less than required. 00:29:04.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:29:04.444 Controller IO queue size 128, less than required. 00:29:04.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:29:04.444 Controller IO queue size 128, less than required. 00:29:04.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:29:04.444 Controller IO queue size 128, less than required. 00:29:04.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:04.444 Controller IO queue size 128, less than required.
00:29:04.444 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.444 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:04.445 Controller IO queue size 128, less than required.
00:29:04.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:04.445 Controller IO queue size 128, less than required.
00:29:04.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:04.445 Controller IO queue size 128, less than required.
00:29:04.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:04.445 Controller IO queue size 128, less than required.
00:29:04.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.445 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:04.445 Controller IO queue size 128, less than required.
00:29:04.445 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:04.445 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:04.445 Initialization complete. Launching workers.
00:29:04.445 ========================================================
00:29:04.445 Latency(us)
00:29:04.445 Device Information : IOPS MiB/s Average min max
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2208.27 94.89 57971.27 855.70 111687.51
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2198.41 94.46 58242.40 801.08 113571.22
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2217.06 95.26 57799.30 703.42 104904.47
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2213.85 95.13 57927.07 698.12 123391.45
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2207.84 94.87 57400.14 970.32 101214.73
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2195.63 94.34 57729.15 923.40 98415.64
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2126.82 91.39 59607.94 891.87 97339.48
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2161.11 92.86 58678.60 650.41 97601.86
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2211.49 95.02 57359.36 818.49 101937.26
00:29:04.445 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2221.56 95.46 57169.39 691.35 109528.69
00:29:04.445 ========================================================
00:29:04.445 Total : 21962.04 943.68 57980.34 650.41 123391.45
00:29:04.445
00:29:04.445 [2024-12-13 06:34:55.634575] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0cc0 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f880 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634654] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1320 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a1650 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634709] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f190 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149efb0 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a4b30 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f370 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x149f550 is same with the state(6) to be set
00:29:04.445 [2024-12-13 06:34:55.634851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14a0ff0 is same with the state(6) to be set
00:29:04.445 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:04.445 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1096686
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1096686
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1096686
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20}
00:29:05.383 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 1096515 ']'
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 1096515
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1096515 ']'
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1096515
00:29:05.383 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1096515) - No such process
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1096515 is not found'
Process with pid 1096515 is not found
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]]
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns
00:29:05.383 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:29:05.384 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:29:05.384 06:34:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:29:07.918 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1
00:29:07.919
00:29:07.919 real 0m9.858s
00:29:07.919 user 0m24.921s
00:29:07.919 sys 0m5.204s
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x
00:29:07.919 ************************************
00:29:07.919 END TEST nvmf_shutdown_tc4 ************************************
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT
00:29:07.919
00:29:07.919 real 0m39.848s
00:29:07.919 user 1m35.775s
00:29:07.919 sys 0m13.824s
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:29:07.919 ************************************
00:29:07.919 END TEST nvmf_shutdown ************************************
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:29:07.919 ************************************
00:29:07.919 START TEST nvmf_nsid ************************************
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp
00:29:07.919 * Looking for test storage...
00:29:07.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-:
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-:
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<'
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 ))
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:29:07.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:07.919 --rc genhtml_branch_coverage=1
00:29:07.919 --rc genhtml_function_coverage=1
00:29:07.919 --rc genhtml_legend=1
00:29:07.919 --rc geninfo_all_blocks=1
00:29:07.919 --rc geninfo_unexecuted_blocks=1
00:29:07.919
00:29:07.919 '
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:29:07.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:07.919 --rc genhtml_branch_coverage=1
00:29:07.919 --rc genhtml_function_coverage=1
00:29:07.919 --rc genhtml_legend=1
00:29:07.919 --rc geninfo_all_blocks=1
00:29:07.919 --rc geninfo_unexecuted_blocks=1
00:29:07.919
00:29:07.919 '
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:29:07.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:07.919 --rc genhtml_branch_coverage=1
00:29:07.919 --rc genhtml_function_coverage=1
00:29:07.919 --rc genhtml_legend=1
00:29:07.919 --rc geninfo_all_blocks=1
00:29:07.919 --rc geninfo_unexecuted_blocks=1
00:29:07.919
00:29:07.919 '
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:29:07.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:29:07.919 --rc genhtml_branch_coverage=1
00:29:07.919 --rc genhtml_function_coverage=1
00:29:07.919 --rc genhtml_legend=1
00:29:07.919 --rc geninfo_all_blocks=1
00:29:07.919 --rc geninfo_unexecuted_blocks=1
00:29:07.919
00:29:07.919 '
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.919 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:07.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.920 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:14.489 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:14.489 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:14.489 Found net devices under 0000:af:00.0: cvl_0_0 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:14.489 Found net devices under 0000:af:00.1: cvl_0_1 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.489 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.489 06:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.490 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:14.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.355 ms 00:29:14.490 00:29:14.490 --- 10.0.0.2 ping statistics --- 00:29:14.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.490 rtt min/avg/max/mdev = 0.355/0.355/0.355/0.000 ms 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:29:14.490 00:29:14.490 --- 10.0.0.1 ping statistics --- 00:29:14.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.490 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.490 06:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=1101152 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 1101152 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1101152 ']' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.490 [2024-12-13 06:35:05.413820] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:14.490 [2024-12-13 06:35:05.413863] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:14.490 [2024-12-13 06:35:05.495643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.490 [2024-12-13 06:35:05.516324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:14.490 [2024-12-13 06:35:05.516362] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:14.490 [2024-12-13 06:35:05.516369] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:14.490 [2024-12-13 06:35:05.516374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:14.490 [2024-12-13 06:35:05.516379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:14.490 [2024-12-13 06:35:05.516876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1101171 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:14.490 
06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=d6dd8df0-1312-4067-90a1-d65ff9747a22 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3d7a99c9-b1b1-42ba-8c9e-67f77488d047 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=dac14a27-add4-4c46-a1bc-74c4ec757fd8 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.490 null0 00:29:14.490 null1 00:29:14.490 [2024-12-13 06:35:05.703127] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:14.490 [2024-12-13 06:35:05.703170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1101171 ] 00:29:14.490 null2 00:29:14.490 [2024-12-13 06:35:05.711652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.490 [2024-12-13 06:35:05.735868] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1101171 /var/tmp/tgt2.sock 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1101171 ']' 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:14.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:14.490 [2024-12-13 06:35:05.778616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.490 [2024-12-13 06:35:05.801010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:14.490 06:35:05 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:14.749 [2024-12-13 06:35:06.318384] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:14.749 [2024-12-13 06:35:06.334474] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:14.749 nvme0n1 nvme0n2 00:29:14.749 nvme1n1 00:29:14.749 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:14.749 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:14.749 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:16.124 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid d6dd8df0-1312-4067-90a1-d65ff9747a22 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:17.058 06:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=d6dd8df01312406790a1d65ff9747a22 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo D6DD8DF01312406790A1D65FF9747A22 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ D6DD8DF01312406790A1D65FF9747A22 == \D\6\D\D\8\D\F\0\1\3\1\2\4\0\6\7\9\0\A\1\D\6\5\F\F\9\7\4\7\A\2\2 ]] 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3d7a99c9-b1b1-42ba-8c9e-67f77488d047 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:17.058 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:17.058 
06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3d7a99c9b1b142ba8c9e67f77488d047 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3D7A99C9B1B142BA8C9E67F77488D047 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3D7A99C9B1B142BA8C9E67F77488D047 == \3\D\7\A\9\9\C\9\B\1\B\1\4\2\B\A\8\C\9\E\6\7\F\7\7\4\8\8\D\0\4\7 ]] 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid dac14a27-add4-4c46-a1bc-74c4ec757fd8 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=dac14a27add44c46a1bc74c4ec757fd8 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo DAC14A27ADD44C46A1BC74C4EC757FD8 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ DAC14A27ADD44C46A1BC74C4EC757FD8 == \D\A\C\1\4\A\2\7\A\D\D\4\4\C\4\6\A\1\B\C\7\4\C\4\E\C\7\5\7\F\D\8 ]] 00:29:17.059 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1101171 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1101171 ']' 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1101171 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1101171 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101171' 00:29:17.317 killing process with pid 1101171 00:29:17.317 06:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1101171 00:29:17.317 06:35:08 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1101171 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:17.576 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:17.576 rmmod nvme_tcp 00:29:17.576 rmmod nvme_fabrics 00:29:17.835 rmmod nvme_keyring 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 1101152 ']' 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1101152 ']' 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:17.835 06:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1101152' 00:29:17.835 killing process with pid 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1101152 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:17.835 06:35:09 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:17.835 06:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:20.370 00:29:20.370 real 0m12.320s 00:29:20.370 user 0m9.520s 00:29:20.370 sys 0m5.499s 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:20.370 ************************************ 00:29:20.370 END TEST nvmf_nsid 00:29:20.370 ************************************ 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:20.370 00:29:20.370 real 18m31.376s 00:29:20.370 user 49m3.935s 00:29:20.370 sys 4m36.835s 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:20.370 06:35:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:20.370 ************************************ 00:29:20.370 END TEST nvmf_target_extra 00:29:20.370 ************************************ 00:29:20.370 06:35:11 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.370 06:35:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.370 06:35:11 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.370 06:35:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:20.370 ************************************ 00:29:20.370 START TEST nvmf_host 00:29:20.370 ************************************ 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:20.370 * Looking for test storage... 
00:29:20.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:20.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.370 --rc genhtml_branch_coverage=1 00:29:20.370 --rc genhtml_function_coverage=1 00:29:20.370 --rc genhtml_legend=1 00:29:20.370 --rc geninfo_all_blocks=1 00:29:20.370 --rc geninfo_unexecuted_blocks=1 00:29:20.370 00:29:20.370 ' 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:20.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.370 --rc genhtml_branch_coverage=1 00:29:20.370 --rc genhtml_function_coverage=1 00:29:20.370 --rc genhtml_legend=1 00:29:20.370 --rc 
geninfo_all_blocks=1 00:29:20.370 --rc geninfo_unexecuted_blocks=1 00:29:20.370 00:29:20.370 ' 00:29:20.370 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:20.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.370 --rc genhtml_branch_coverage=1 00:29:20.370 --rc genhtml_function_coverage=1 00:29:20.371 --rc genhtml_legend=1 00:29:20.371 --rc geninfo_all_blocks=1 00:29:20.371 --rc geninfo_unexecuted_blocks=1 00:29:20.371 00:29:20.371 ' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:20.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.371 --rc genhtml_branch_coverage=1 00:29:20.371 --rc genhtml_function_coverage=1 00:29:20.371 --rc genhtml_legend=1 00:29:20.371 --rc geninfo_all_blocks=1 00:29:20.371 --rc geninfo_unexecuted_blocks=1 00:29:20.371 00:29:20.371 ' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.371 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:20.371 ************************************ 00:29:20.371 START TEST nvmf_multicontroller 00:29:20.371 ************************************ 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:20.371 * Looking for test storage... 
00:29:20.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:20.371 06:35:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.631 --rc genhtml_branch_coverage=1 00:29:20.631 --rc genhtml_function_coverage=1 
00:29:20.631 --rc genhtml_legend=1 00:29:20.631 --rc geninfo_all_blocks=1 00:29:20.631 --rc geninfo_unexecuted_blocks=1 00:29:20.631 00:29:20.631 ' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.631 --rc genhtml_branch_coverage=1 00:29:20.631 --rc genhtml_function_coverage=1 00:29:20.631 --rc genhtml_legend=1 00:29:20.631 --rc geninfo_all_blocks=1 00:29:20.631 --rc geninfo_unexecuted_blocks=1 00:29:20.631 00:29:20.631 ' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.631 --rc genhtml_branch_coverage=1 00:29:20.631 --rc genhtml_function_coverage=1 00:29:20.631 --rc genhtml_legend=1 00:29:20.631 --rc geninfo_all_blocks=1 00:29:20.631 --rc geninfo_unexecuted_blocks=1 00:29:20.631 00:29:20.631 ' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:20.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:20.631 --rc genhtml_branch_coverage=1 00:29:20.631 --rc genhtml_function_coverage=1 00:29:20.631 --rc genhtml_legend=1 00:29:20.631 --rc geninfo_all_blocks=1 00:29:20.631 --rc geninfo_unexecuted_blocks=1 00:29:20.631 00:29:20.631 ' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:20.631 06:35:12 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:20.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:20.631 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:20.632 06:35:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:27.317 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:27.317 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:27.317 06:35:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:27.317 Found net devices under 0000:af:00.0: cvl_0_0 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:27.317 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:27.318 Found net devices under 0000:af:00.1: cvl_0_1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:27.318 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:27.318 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:29:27.318 00:29:27.318 --- 10.0.0.2 ping statistics --- 00:29:27.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.318 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:27.318 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:27.318 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:29:27.318 00:29:27.318 --- 10.0.0.1 ping statistics --- 00:29:27.318 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:27.318 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=1105399 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 1105399 00:29:27.318 06:35:17 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105399 ']' 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:27.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.318 06:35:17 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 [2024-12-13 06:35:18.037750] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:27.318 [2024-12-13 06:35:18.037796] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:27.318 [2024-12-13 06:35:18.118051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:27.318 [2024-12-13 06:35:18.140957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:27.318 [2024-12-13 06:35:18.140995] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:27.318 [2024-12-13 06:35:18.141003] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:27.318 [2024-12-13 06:35:18.141009] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:27.318 [2024-12-13 06:35:18.141014] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:27.318 [2024-12-13 06:35:18.142201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:27.318 [2024-12-13 06:35:18.142311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.318 [2024-12-13 06:35:18.142312] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 [2024-12-13 06:35:18.281929] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 Malloc0 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.318 [2024-12-13 
06:35:18.352086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:27.318 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 [2024-12-13 06:35:18.364029] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 Malloc1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1105430 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w 
write -t 1 -f 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1105430 /var/tmp/bdevperf.sock 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1105430 ']' 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:27.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 NVMe0n1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.319 1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:27.319 06:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 request: 00:29:27.319 { 00:29:27.319 "name": "NVMe0", 00:29:27.319 "trtype": "tcp", 00:29:27.319 "traddr": "10.0.0.2", 00:29:27.319 "adrfam": "ipv4", 00:29:27.319 "trsvcid": "4420", 00:29:27.319 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.319 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:27.319 "hostaddr": "10.0.0.1", 00:29:27.319 "prchk_reftag": false, 00:29:27.319 "prchk_guard": false, 00:29:27.319 "hdgst": false, 00:29:27.319 "ddgst": false, 00:29:27.319 "allow_unrecognized_csi": false, 00:29:27.319 "method": "bdev_nvme_attach_controller", 00:29:27.319 "req_id": 1 00:29:27.319 } 00:29:27.319 Got JSON-RPC error response 00:29:27.319 response: 00:29:27.319 { 00:29:27.319 "code": -114, 00:29:27.319 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:27.319 } 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:27.319 06:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.319 request: 00:29:27.319 { 00:29:27.319 "name": "NVMe0", 00:29:27.319 "trtype": "tcp", 00:29:27.319 "traddr": "10.0.0.2", 00:29:27.319 "adrfam": "ipv4", 00:29:27.319 "trsvcid": "4420", 00:29:27.319 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:27.319 "hostaddr": "10.0.0.1", 00:29:27.319 "prchk_reftag": false, 00:29:27.319 "prchk_guard": false, 00:29:27.319 "hdgst": false, 00:29:27.319 "ddgst": false, 00:29:27.319 "allow_unrecognized_csi": false, 00:29:27.319 "method": "bdev_nvme_attach_controller", 00:29:27.319 "req_id": 1 00:29:27.319 } 00:29:27.319 Got JSON-RPC error response 00:29:27.319 response: 00:29:27.319 { 00:29:27.319 "code": -114, 00:29:27.319 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:27.319 } 00:29:27.319 06:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.319 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.320 request: 00:29:27.320 { 00:29:27.320 "name": "NVMe0", 00:29:27.320 "trtype": "tcp", 00:29:27.320 "traddr": "10.0.0.2", 00:29:27.320 "adrfam": "ipv4", 00:29:27.320 "trsvcid": "4420", 00:29:27.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.320 "hostaddr": "10.0.0.1", 00:29:27.320 "prchk_reftag": false, 00:29:27.320 "prchk_guard": false, 00:29:27.320 "hdgst": false, 00:29:27.320 "ddgst": false, 00:29:27.320 "multipath": "disable", 00:29:27.320 "allow_unrecognized_csi": false, 00:29:27.320 "method": "bdev_nvme_attach_controller", 00:29:27.320 "req_id": 1 00:29:27.320 } 00:29:27.320 Got JSON-RPC error response 00:29:27.320 response: 00:29:27.320 { 00:29:27.320 "code": -114, 00:29:27.320 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:27.320 } 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.320 request: 00:29:27.320 { 00:29:27.320 "name": "NVMe0", 00:29:27.320 "trtype": "tcp", 00:29:27.320 "traddr": "10.0.0.2", 00:29:27.320 "adrfam": "ipv4", 00:29:27.320 "trsvcid": "4420", 00:29:27.320 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.320 "hostaddr": "10.0.0.1", 00:29:27.320 "prchk_reftag": false, 00:29:27.320 "prchk_guard": false, 00:29:27.320 "hdgst": false, 00:29:27.320 "ddgst": false, 00:29:27.320 "multipath": "failover", 00:29:27.320 "allow_unrecognized_csi": false, 00:29:27.320 "method": "bdev_nvme_attach_controller", 00:29:27.320 "req_id": 1 00:29:27.320 } 00:29:27.320 Got JSON-RPC error response 00:29:27.320 response: 00:29:27.320 { 00:29:27.320 "code": -114, 00:29:27.320 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:27.320 } 00:29:27.320 06:35:18 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.320 06:35:18 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.579 NVMe0n1 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.579 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:27.579 06:35:19 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:28.956 { 00:29:28.956 "results": [ 00:29:28.956 { 00:29:28.956 "job": "NVMe0n1", 00:29:28.956 "core_mask": "0x1", 00:29:28.956 "workload": "write", 00:29:28.956 "status": "finished", 00:29:28.956 "queue_depth": 128, 00:29:28.956 "io_size": 4096, 00:29:28.956 "runtime": 1.003803, 00:29:28.956 "iops": 25179.243337587155, 00:29:28.956 "mibps": 98.35641928744982, 00:29:28.956 "io_failed": 0, 00:29:28.956 "io_timeout": 0, 00:29:28.956 "avg_latency_us": 5077.338243285762, 00:29:28.956 "min_latency_us": 4337.8590476190475, 00:29:28.956 "max_latency_us": 10485.76 00:29:28.956 } 00:29:28.956 ], 00:29:28.956 "core_count": 1 00:29:28.956 } 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe1 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105430 ']' 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105430' 00:29:28.956 killing process with pid 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105430 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:28.956 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:28.956 [2024-12-13 06:35:18.468989] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:28.956 [2024-12-13 06:35:18.469033] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105430 ] 00:29:28.956 [2024-12-13 06:35:18.545226] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:28.956 [2024-12-13 06:35:18.567438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:28.956 [2024-12-13 06:35:19.158803] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 51f6c276-a025-4b29-adfe-eb14283ba85e already exists 00:29:28.956 [2024-12-13 06:35:19.158831] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:51f6c276-a025-4b29-adfe-eb14283ba85e alias for bdev NVMe1n1 00:29:28.956 [2024-12-13 06:35:19.158839] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:28.956 Running I/O for 1 seconds... 00:29:28.956 25147.00 IOPS, 98.23 MiB/s 00:29:28.956 Latency(us) 00:29:28.956 [2024-12-13T05:35:20.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.956 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:28.956 NVMe0n1 : 1.00 25179.24 98.36 0.00 0.00 5077.34 4337.86 10485.76 00:29:28.956 [2024-12-13T05:35:20.610Z] =================================================================================================================== 00:29:28.956 [2024-12-13T05:35:20.610Z] Total : 25179.24 98.36 0.00 0.00 5077.34 4337.86 10485.76 00:29:28.956 Received shutdown signal, test time was about 1.000000 seconds 00:29:28.956 00:29:28.956 Latency(us) 00:29:28.956 [2024-12-13T05:35:20.610Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:28.956 [2024-12-13T05:35:20.610Z] =================================================================================================================== 00:29:28.956 [2024-12-13T05:35:20.610Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:28.956 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:28.956 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:28.956 rmmod nvme_tcp 00:29:28.956 rmmod nvme_fabrics 00:29:28.956 rmmod nvme_keyring 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 1105399 ']' 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 1105399 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1105399 ']' 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1105399 
00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1105399 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1105399' 00:29:29.215 killing process with pid 1105399 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1105399 00:29:29.215 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1105399 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:29.474 06:35:20 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.377 06:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:31.377 00:29:31.377 real 0m11.072s 00:29:31.377 user 0m12.118s 00:29:31.377 sys 0m5.178s 00:29:31.377 06:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:31.377 06:35:22 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:31.377 ************************************ 00:29:31.377 END TEST nvmf_multicontroller 00:29:31.377 ************************************ 00:29:31.378 06:35:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:31.378 06:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:31.378 06:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:31.378 06:35:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:31.637 ************************************ 00:29:31.637 START TEST nvmf_aer 00:29:31.637 ************************************ 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:31.637 * Looking for test storage... 
00:29:31.637 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.637 --rc genhtml_branch_coverage=1 00:29:31.637 --rc genhtml_function_coverage=1 00:29:31.637 --rc genhtml_legend=1 00:29:31.637 --rc geninfo_all_blocks=1 00:29:31.637 --rc geninfo_unexecuted_blocks=1 00:29:31.637 00:29:31.637 ' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.637 --rc 
genhtml_branch_coverage=1 00:29:31.637 --rc genhtml_function_coverage=1 00:29:31.637 --rc genhtml_legend=1 00:29:31.637 --rc geninfo_all_blocks=1 00:29:31.637 --rc geninfo_unexecuted_blocks=1 00:29:31.637 00:29:31.637 ' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.637 --rc genhtml_branch_coverage=1 00:29:31.637 --rc genhtml_function_coverage=1 00:29:31.637 --rc genhtml_legend=1 00:29:31.637 --rc geninfo_all_blocks=1 00:29:31.637 --rc geninfo_unexecuted_blocks=1 00:29:31.637 00:29:31.637 ' 00:29:31.637 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:31.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:31.637 --rc genhtml_branch_coverage=1 00:29:31.637 --rc genhtml_function_coverage=1 00:29:31.637 --rc genhtml_legend=1 00:29:31.637 --rc geninfo_all_blocks=1 00:29:31.638 --rc geninfo_unexecuted_blocks=1 00:29:31.638 00:29:31.638 ' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.638 06:35:23 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:31.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:31.638 06:35:23 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:38.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:38.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:38.209 06:35:28 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:38.209 Found net devices under 0000:af:00.0: cvl_0_0 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:38.209 Found net devices under 0000:af:00.1: cvl_0_1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:38.209 06:35:28 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:38.209 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:38.209 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:38.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:38.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.389 ms 00:29:38.209 00:29:38.209 --- 10.0.0.2 ping statistics --- 00:29:38.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.210 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:38.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:38.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:38.210 00:29:38.210 --- 10.0.0.1 ping statistics --- 00:29:38.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:38.210 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=1109215 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 1109215 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1109215 ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:38.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 [2024-12-13 06:35:29.122085] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:38.210 [2024-12-13 06:35:29.122135] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:38.210 [2024-12-13 06:35:29.202518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:38.210 [2024-12-13 06:35:29.226093] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:38.210 [2024-12-13 06:35:29.226130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:38.210 [2024-12-13 06:35:29.226137] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:38.210 [2024-12-13 06:35:29.226144] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:38.210 [2024-12-13 06:35:29.226149] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:38.210 [2024-12-13 06:35:29.227443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.210 [2024-12-13 06:35:29.227553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:38.210 [2024-12-13 06:35:29.227586] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.210 [2024-12-13 06:35:29.227587] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 [2024-12-13 06:35:29.360672] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 Malloc0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 [2024-12-13 06:35:29.421461] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 [ 00:29:38.210 { 00:29:38.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:38.210 "subtype": "Discovery", 00:29:38.210 "listen_addresses": [], 00:29:38.210 "allow_any_host": true, 00:29:38.210 "hosts": [] 00:29:38.210 }, 00:29:38.210 { 00:29:38.210 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.210 "subtype": "NVMe", 00:29:38.210 "listen_addresses": [ 00:29:38.210 { 00:29:38.210 "trtype": "TCP", 00:29:38.210 "adrfam": "IPv4", 00:29:38.210 "traddr": "10.0.0.2", 00:29:38.210 "trsvcid": "4420" 00:29:38.210 } 00:29:38.210 ], 00:29:38.210 "allow_any_host": true, 00:29:38.210 "hosts": [], 00:29:38.210 "serial_number": "SPDK00000000000001", 00:29:38.210 "model_number": "SPDK bdev Controller", 00:29:38.210 "max_namespaces": 2, 00:29:38.210 "min_cntlid": 1, 00:29:38.210 "max_cntlid": 65519, 00:29:38.210 "namespaces": [ 00:29:38.210 { 00:29:38.210 "nsid": 1, 00:29:38.210 "bdev_name": "Malloc0", 00:29:38.210 "name": "Malloc0", 00:29:38.210 "nguid": "E47F7DE215144D6A9C82C0ED2483957B", 00:29:38.210 "uuid": "e47f7de2-1514-4d6a-9c82-c0ed2483957b" 00:29:38.210 } 00:29:38.210 ] 00:29:38.210 } 00:29:38.210 ] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1109383 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 Malloc1 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.210 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.210 [ 00:29:38.210 { 00:29:38.210 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:38.211 "subtype": "Discovery", 00:29:38.211 "listen_addresses": [], 00:29:38.211 "allow_any_host": true, 00:29:38.211 "hosts": [] 00:29:38.211 }, 00:29:38.211 { 00:29:38.211 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:38.211 "subtype": "NVMe", 00:29:38.211 "listen_addresses": [ 00:29:38.211 { 00:29:38.211 "trtype": "TCP", 00:29:38.211 "adrfam": "IPv4", 00:29:38.211 "traddr": "10.0.0.2", 00:29:38.211 "trsvcid": "4420" 00:29:38.211 } 00:29:38.211 ], 00:29:38.211 "allow_any_host": true, 00:29:38.211 "hosts": [], 00:29:38.211 "serial_number": "SPDK00000000000001", 00:29:38.211 "model_number": 
"SPDK bdev Controller", 00:29:38.211 "max_namespaces": 2, 00:29:38.211 "min_cntlid": 1, 00:29:38.211 "max_cntlid": 65519, 00:29:38.211 "namespaces": [ 00:29:38.211 { 00:29:38.211 "nsid": 1, 00:29:38.211 "bdev_name": "Malloc0", 00:29:38.211 "name": "Malloc0", 00:29:38.211 Asynchronous Event Request test 00:29:38.211 Attaching to 10.0.0.2 00:29:38.211 Attached to 10.0.0.2 00:29:38.211 Registering asynchronous event callbacks... 00:29:38.211 Starting namespace attribute notice tests for all controllers... 00:29:38.211 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:38.211 aer_cb - Changed Namespace 00:29:38.211 Cleaning up... 00:29:38.211 "nguid": "E47F7DE215144D6A9C82C0ED2483957B", 00:29:38.211 "uuid": "e47f7de2-1514-4d6a-9c82-c0ed2483957b" 00:29:38.211 }, 00:29:38.211 { 00:29:38.211 "nsid": 2, 00:29:38.211 "bdev_name": "Malloc1", 00:29:38.211 "name": "Malloc1", 00:29:38.211 "nguid": "039297C7ADC44F64A947588B0B9B96C3", 00:29:38.211 "uuid": "039297c7-adc4-4f64-a947-588b0b9b96c3" 00:29:38.211 } 00:29:38.211 ] 00:29:38.211 } 00:29:38.211 ] 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1109383 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.211 
06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:38.211 rmmod nvme_tcp 00:29:38.211 rmmod nvme_fabrics 00:29:38.211 rmmod nvme_keyring 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 1109215 ']' 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 1109215 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1109215 ']' 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 
-- # kill -0 1109215 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:38.211 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1109215 00:29:38.470 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:38.470 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:38.470 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1109215' 00:29:38.470 killing process with pid 1109215 00:29:38.470 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1109215 00:29:38.470 06:35:29 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1109215 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:38.470 06:35:30 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.005 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:41.005 00:29:41.005 real 0m9.079s 00:29:41.005 user 0m4.977s 00:29:41.005 sys 0m4.773s 00:29:41.005 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:41.005 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:41.005 ************************************ 00:29:41.006 END TEST nvmf_aer 00:29:41.006 ************************************ 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:41.006 ************************************ 00:29:41.006 START TEST nvmf_async_init 00:29:41.006 ************************************ 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:41.006 * Looking for test storage... 
00:29:41.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:41.006 06:35:32 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:41.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.006 --rc genhtml_branch_coverage=1 00:29:41.006 --rc genhtml_function_coverage=1 00:29:41.006 --rc genhtml_legend=1 00:29:41.006 --rc geninfo_all_blocks=1 00:29:41.006 --rc geninfo_unexecuted_blocks=1 00:29:41.006 
00:29:41.006 ' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:41.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.006 --rc genhtml_branch_coverage=1 00:29:41.006 --rc genhtml_function_coverage=1 00:29:41.006 --rc genhtml_legend=1 00:29:41.006 --rc geninfo_all_blocks=1 00:29:41.006 --rc geninfo_unexecuted_blocks=1 00:29:41.006 00:29:41.006 ' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:41.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.006 --rc genhtml_branch_coverage=1 00:29:41.006 --rc genhtml_function_coverage=1 00:29:41.006 --rc genhtml_legend=1 00:29:41.006 --rc geninfo_all_blocks=1 00:29:41.006 --rc geninfo_unexecuted_blocks=1 00:29:41.006 00:29:41.006 ' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:41.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:41.006 --rc genhtml_branch_coverage=1 00:29:41.006 --rc genhtml_function_coverage=1 00:29:41.006 --rc genhtml_legend=1 00:29:41.006 --rc geninfo_all_blocks=1 00:29:41.006 --rc geninfo_unexecuted_blocks=1 00:29:41.006 00:29:41.006 ' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:41.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:41.006 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=cfecd420a2a5467f8c59ed386acb8592 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:41.007 06:35:32 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.585 06:35:37 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:47.585 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:47.585 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:47.585 Found net devices under 0000:af:00.0: cvl_0_0 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:47.585 Found net devices under 0000:af:00.1: cvl_0_1 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.585 06:35:37 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.585 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:47.585 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:29:47.585 00:29:47.585 --- 10.0.0.2 ping statistics --- 00:29:47.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.585 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.585 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:47.585 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:29:47.585 00:29:47.585 --- 10.0.0.1 ping statistics --- 00:29:47.585 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.585 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:29:47.585 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=1112856 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 1112856 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1112856 ']' 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 [2024-12-13 06:35:38.343899] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:47.586 [2024-12-13 06:35:38.343943] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.586 [2024-12-13 06:35:38.421997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:47.586 [2024-12-13 06:35:38.443584] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:47.586 [2024-12-13 06:35:38.443617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:47.586 [2024-12-13 06:35:38.443624] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:47.586 [2024-12-13 06:35:38.443630] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:47.586 [2024-12-13 06:35:38.443635] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:47.586 [2024-12-13 06:35:38.444106] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 [2024-12-13 06:35:38.573944] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 null0 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g cfecd420a2a5467f8c59ed386acb8592 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 [2024-12-13 06:35:38.626205] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 nvme0n1 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.586 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.586 [ 00:29:47.586 { 00:29:47.586 "name": "nvme0n1", 00:29:47.586 "aliases": [ 00:29:47.586 "cfecd420-a2a5-467f-8c59-ed386acb8592" 00:29:47.586 ], 00:29:47.586 "product_name": "NVMe disk", 00:29:47.586 "block_size": 512, 00:29:47.586 "num_blocks": 2097152, 00:29:47.586 "uuid": "cfecd420-a2a5-467f-8c59-ed386acb8592", 00:29:47.586 "numa_id": 1, 00:29:47.586 "assigned_rate_limits": { 00:29:47.586 "rw_ios_per_sec": 0, 00:29:47.586 "rw_mbytes_per_sec": 0, 00:29:47.586 "r_mbytes_per_sec": 0, 00:29:47.586 "w_mbytes_per_sec": 0 00:29:47.586 }, 00:29:47.586 "claimed": false, 00:29:47.586 "zoned": false, 00:29:47.586 "supported_io_types": { 00:29:47.586 "read": true, 00:29:47.586 "write": true, 00:29:47.586 "unmap": false, 00:29:47.586 "flush": true, 00:29:47.586 "reset": true, 00:29:47.586 "nvme_admin": true, 00:29:47.586 "nvme_io": true, 00:29:47.586 "nvme_io_md": false, 00:29:47.586 "write_zeroes": true, 00:29:47.586 "zcopy": false, 00:29:47.586 "get_zone_info": false, 00:29:47.586 "zone_management": false, 00:29:47.586 "zone_append": false, 00:29:47.586 "compare": true, 00:29:47.586 "compare_and_write": true, 00:29:47.586 "abort": true, 00:29:47.586 "seek_hole": false, 00:29:47.586 "seek_data": false, 00:29:47.586 "copy": true, 00:29:47.586 
"nvme_iov_md": false 00:29:47.586 }, 00:29:47.586 "memory_domains": [ 00:29:47.586 { 00:29:47.586 "dma_device_id": "system", 00:29:47.586 "dma_device_type": 1 00:29:47.586 } 00:29:47.586 ], 00:29:47.586 "driver_specific": { 00:29:47.586 "nvme": [ 00:29:47.586 { 00:29:47.586 "trid": { 00:29:47.586 "trtype": "TCP", 00:29:47.586 "adrfam": "IPv4", 00:29:47.586 "traddr": "10.0.0.2", 00:29:47.586 "trsvcid": "4420", 00:29:47.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.586 }, 00:29:47.586 "ctrlr_data": { 00:29:47.586 "cntlid": 1, 00:29:47.586 "vendor_id": "0x8086", 00:29:47.586 "model_number": "SPDK bdev Controller", 00:29:47.586 "serial_number": "00000000000000000000", 00:29:47.586 "firmware_revision": "25.01", 00:29:47.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.586 "oacs": { 00:29:47.586 "security": 0, 00:29:47.586 "format": 0, 00:29:47.586 "firmware": 0, 00:29:47.586 "ns_manage": 0 00:29:47.586 }, 00:29:47.586 "multi_ctrlr": true, 00:29:47.586 "ana_reporting": false 00:29:47.586 }, 00:29:47.586 "vs": { 00:29:47.586 "nvme_version": "1.3" 00:29:47.586 }, 00:29:47.586 "ns_data": { 00:29:47.586 "id": 1, 00:29:47.586 "can_share": true 00:29:47.586 } 00:29:47.586 } 00:29:47.586 ], 00:29:47.586 "mp_policy": "active_passive" 00:29:47.586 } 00:29:47.586 } 00:29:47.586 ] 00:29:47.587 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:29:47.587 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 [2024-12-13 06:35:38.894746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:47.587 [2024-12-13 06:35:38.894801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0x20cba90 (9): Bad file descriptor 00:29:47.587 [2024-12-13 06:35:39.028526] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 [ 00:29:47.587 { 00:29:47.587 "name": "nvme0n1", 00:29:47.587 "aliases": [ 00:29:47.587 "cfecd420-a2a5-467f-8c59-ed386acb8592" 00:29:47.587 ], 00:29:47.587 "product_name": "NVMe disk", 00:29:47.587 "block_size": 512, 00:29:47.587 "num_blocks": 2097152, 00:29:47.587 "uuid": "cfecd420-a2a5-467f-8c59-ed386acb8592", 00:29:47.587 "numa_id": 1, 00:29:47.587 "assigned_rate_limits": { 00:29:47.587 "rw_ios_per_sec": 0, 00:29:47.587 "rw_mbytes_per_sec": 0, 00:29:47.587 "r_mbytes_per_sec": 0, 00:29:47.587 "w_mbytes_per_sec": 0 00:29:47.587 }, 00:29:47.587 "claimed": false, 00:29:47.587 "zoned": false, 00:29:47.587 "supported_io_types": { 00:29:47.587 "read": true, 00:29:47.587 "write": true, 00:29:47.587 "unmap": false, 00:29:47.587 "flush": true, 00:29:47.587 "reset": true, 00:29:47.587 "nvme_admin": true, 00:29:47.587 "nvme_io": true, 00:29:47.587 "nvme_io_md": false, 00:29:47.587 "write_zeroes": true, 00:29:47.587 "zcopy": false, 00:29:47.587 "get_zone_info": false, 00:29:47.587 "zone_management": false, 00:29:47.587 "zone_append": false, 00:29:47.587 "compare": true, 00:29:47.587 "compare_and_write": true, 00:29:47.587 "abort": true, 00:29:47.587 "seek_hole": false, 00:29:47.587 "seek_data": false, 00:29:47.587 "copy": true, 00:29:47.587 "nvme_iov_md": false 00:29:47.587 }, 00:29:47.587 "memory_domains": [ 
00:29:47.587 { 00:29:47.587 "dma_device_id": "system", 00:29:47.587 "dma_device_type": 1 00:29:47.587 } 00:29:47.587 ], 00:29:47.587 "driver_specific": { 00:29:47.587 "nvme": [ 00:29:47.587 { 00:29:47.587 "trid": { 00:29:47.587 "trtype": "TCP", 00:29:47.587 "adrfam": "IPv4", 00:29:47.587 "traddr": "10.0.0.2", 00:29:47.587 "trsvcid": "4420", 00:29:47.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.587 }, 00:29:47.587 "ctrlr_data": { 00:29:47.587 "cntlid": 2, 00:29:47.587 "vendor_id": "0x8086", 00:29:47.587 "model_number": "SPDK bdev Controller", 00:29:47.587 "serial_number": "00000000000000000000", 00:29:47.587 "firmware_revision": "25.01", 00:29:47.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.587 "oacs": { 00:29:47.587 "security": 0, 00:29:47.587 "format": 0, 00:29:47.587 "firmware": 0, 00:29:47.587 "ns_manage": 0 00:29:47.587 }, 00:29:47.587 "multi_ctrlr": true, 00:29:47.587 "ana_reporting": false 00:29:47.587 }, 00:29:47.587 "vs": { 00:29:47.587 "nvme_version": "1.3" 00:29:47.587 }, 00:29:47.587 "ns_data": { 00:29:47.587 "id": 1, 00:29:47.587 "can_share": true 00:29:47.587 } 00:29:47.587 } 00:29:47.587 ], 00:29:47.587 "mp_policy": "active_passive" 00:29:47.587 } 00:29:47.587 } 00:29:47.587 ] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4CYPmiEnr1 
00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4CYPmiEnr1 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.4CYPmiEnr1 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 [2024-12-13 06:35:39.103360] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:47.587 [2024-12-13 06:35:39.103456] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 [2024-12-13 06:35:39.119414] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:47.587 nvme0n1 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.587 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.587 [ 00:29:47.587 { 00:29:47.587 "name": "nvme0n1", 00:29:47.587 "aliases": [ 00:29:47.587 "cfecd420-a2a5-467f-8c59-ed386acb8592" 00:29:47.587 ], 00:29:47.587 "product_name": "NVMe disk", 00:29:47.587 "block_size": 512, 00:29:47.587 "num_blocks": 2097152, 00:29:47.587 "uuid": "cfecd420-a2a5-467f-8c59-ed386acb8592", 00:29:47.587 "numa_id": 1, 00:29:47.587 "assigned_rate_limits": { 00:29:47.587 "rw_ios_per_sec": 0, 00:29:47.587 
"rw_mbytes_per_sec": 0, 00:29:47.587 "r_mbytes_per_sec": 0, 00:29:47.587 "w_mbytes_per_sec": 0 00:29:47.587 }, 00:29:47.587 "claimed": false, 00:29:47.587 "zoned": false, 00:29:47.587 "supported_io_types": { 00:29:47.587 "read": true, 00:29:47.587 "write": true, 00:29:47.587 "unmap": false, 00:29:47.587 "flush": true, 00:29:47.587 "reset": true, 00:29:47.587 "nvme_admin": true, 00:29:47.587 "nvme_io": true, 00:29:47.587 "nvme_io_md": false, 00:29:47.587 "write_zeroes": true, 00:29:47.587 "zcopy": false, 00:29:47.587 "get_zone_info": false, 00:29:47.587 "zone_management": false, 00:29:47.587 "zone_append": false, 00:29:47.587 "compare": true, 00:29:47.587 "compare_and_write": true, 00:29:47.587 "abort": true, 00:29:47.587 "seek_hole": false, 00:29:47.587 "seek_data": false, 00:29:47.587 "copy": true, 00:29:47.587 "nvme_iov_md": false 00:29:47.587 }, 00:29:47.587 "memory_domains": [ 00:29:47.587 { 00:29:47.587 "dma_device_id": "system", 00:29:47.587 "dma_device_type": 1 00:29:47.587 } 00:29:47.587 ], 00:29:47.587 "driver_specific": { 00:29:47.587 "nvme": [ 00:29:47.587 { 00:29:47.587 "trid": { 00:29:47.587 "trtype": "TCP", 00:29:47.587 "adrfam": "IPv4", 00:29:47.587 "traddr": "10.0.0.2", 00:29:47.587 "trsvcid": "4421", 00:29:47.587 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:47.587 }, 00:29:47.587 "ctrlr_data": { 00:29:47.587 "cntlid": 3, 00:29:47.587 "vendor_id": "0x8086", 00:29:47.587 "model_number": "SPDK bdev Controller", 00:29:47.587 "serial_number": "00000000000000000000", 00:29:47.587 "firmware_revision": "25.01", 00:29:47.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:47.587 "oacs": { 00:29:47.587 "security": 0, 00:29:47.587 "format": 0, 00:29:47.587 "firmware": 0, 00:29:47.587 "ns_manage": 0 00:29:47.587 }, 00:29:47.587 "multi_ctrlr": true, 00:29:47.587 "ana_reporting": false 00:29:47.587 }, 00:29:47.587 "vs": { 00:29:47.587 "nvme_version": "1.3" 00:29:47.587 }, 00:29:47.587 "ns_data": { 00:29:47.587 "id": 1, 00:29:47.587 "can_share": true 00:29:47.587 } 
00:29:47.587 } 00:29:47.587 ], 00:29:47.587 "mp_policy": "active_passive" 00:29:47.587 } 00:29:47.587 } 00:29:47.588 ] 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.4CYPmiEnr1 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:47.588 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:47.847 rmmod nvme_tcp 00:29:47.847 rmmod nvme_fabrics 00:29:47.847 rmmod nvme_keyring 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:29:47.847 06:35:39 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 1112856 ']' 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1112856 ']' 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1112856' 00:29:47.847 killing process with pid 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1112856 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:47.847 
06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:47.847 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.106 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.106 06:35:39 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:50.010 00:29:50.010 real 0m9.375s 00:29:50.010 user 0m3.127s 00:29:50.010 sys 0m4.652s 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:29:50.010 ************************************ 00:29:50.010 END TEST nvmf_async_init 00:29:50.010 ************************************ 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.010 ************************************ 00:29:50.010 START TEST dma 00:29:50.010 ************************************ 00:29:50.010 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 
00:29:50.269 * Looking for test storage... 00:29:50.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.269 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.270 --rc genhtml_branch_coverage=1 00:29:50.270 --rc genhtml_function_coverage=1 00:29:50.270 --rc genhtml_legend=1 00:29:50.270 --rc geninfo_all_blocks=1 00:29:50.270 --rc geninfo_unexecuted_blocks=1 00:29:50.270 00:29:50.270 ' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:50.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.270 --rc genhtml_branch_coverage=1 00:29:50.270 --rc genhtml_function_coverage=1 
00:29:50.270 --rc genhtml_legend=1 00:29:50.270 --rc geninfo_all_blocks=1 00:29:50.270 --rc geninfo_unexecuted_blocks=1 00:29:50.270 00:29:50.270 ' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.270 --rc genhtml_branch_coverage=1 00:29:50.270 --rc genhtml_function_coverage=1 00:29:50.270 --rc genhtml_legend=1 00:29:50.270 --rc geninfo_all_blocks=1 00:29:50.270 --rc geninfo_unexecuted_blocks=1 00:29:50.270 00:29:50.270 ' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.270 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.270 --rc genhtml_branch_coverage=1 00:29:50.270 --rc genhtml_function_coverage=1 00:29:50.270 --rc genhtml_legend=1 00:29:50.270 --rc geninfo_all_blocks=1 00:29:50.270 --rc geninfo_unexecuted_blocks=1 00:29:50.270 00:29:50.270 ' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:29:50.270 
06:35:41 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:29:50.270 00:29:50.270 real 0m0.212s 00:29:50.270 user 0m0.134s 00:29:50.270 sys 0m0.093s 00:29:50.270 06:35:41 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:29:50.270 ************************************ 00:29:50.270 END TEST dma 00:29:50.270 ************************************ 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:50.270 ************************************ 00:29:50.270 START TEST nvmf_identify 00:29:50.270 ************************************ 00:29:50.270 06:35:41 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:29:50.530 * Looking for test storage... 
00:29:50.530 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.531 --rc genhtml_branch_coverage=1 00:29:50.531 --rc genhtml_function_coverage=1 00:29:50.531 --rc genhtml_legend=1 00:29:50.531 --rc geninfo_all_blocks=1 00:29:50.531 --rc geninfo_unexecuted_blocks=1 00:29:50.531 00:29:50.531 ' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:29:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.531 --rc genhtml_branch_coverage=1 00:29:50.531 --rc genhtml_function_coverage=1 00:29:50.531 --rc genhtml_legend=1 00:29:50.531 --rc geninfo_all_blocks=1 00:29:50.531 --rc geninfo_unexecuted_blocks=1 00:29:50.531 00:29:50.531 ' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.531 --rc genhtml_branch_coverage=1 00:29:50.531 --rc genhtml_function_coverage=1 00:29:50.531 --rc genhtml_legend=1 00:29:50.531 --rc geninfo_all_blocks=1 00:29:50.531 --rc geninfo_unexecuted_blocks=1 00:29:50.531 00:29:50.531 ' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:50.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:50.531 --rc genhtml_branch_coverage=1 00:29:50.531 --rc genhtml_function_coverage=1 00:29:50.531 --rc genhtml_legend=1 00:29:50.531 --rc geninfo_all_blocks=1 00:29:50.531 --rc geninfo_unexecuted_blocks=1 00:29:50.531 00:29:50.531 ' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:50.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:50.531 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:29:50.532 06:35:42 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:57.102 06:35:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:57.102 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:57.102 
06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:57.102 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.102 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:57.103 Found net devices under 0000:af:00.0: cvl_0_0 00:29:57.103 06:35:47 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:57.103 Found net devices under 0000:af:00.1: cvl_0_1 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:57.103 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:57.103 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:29:57.103 00:29:57.103 --- 10.0.0.2 ping statistics --- 00:29:57.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.103 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:57.103 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:57.103 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:29:57.103 00:29:57.103 --- 10.0.0.1 ping statistics --- 00:29:57.103 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:57.103 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1116599 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1116599 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1116599 ']' 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:57.103 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:57.103 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 [2024-12-13 06:35:48.012023] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:57.103 [2024-12-13 06:35:48.012066] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:57.103 [2024-12-13 06:35:48.089216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:57.103 [2024-12-13 06:35:48.113092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:57.103 [2024-12-13 06:35:48.113132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:57.103 [2024-12-13 06:35:48.113139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:57.103 [2024-12-13 06:35:48.113145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:57.103 [2024-12-13 06:35:48.113150] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:57.103 [2024-12-13 06:35:48.114443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.103 [2024-12-13 06:35:48.114555] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:57.103 [2024-12-13 06:35:48.114588] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:57.103 [2024-12-13 06:35:48.114590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 [2024-12-13 06:35:48.207193] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 Malloc0 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.103 06:35:48 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.103 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.104 [2024-12-13 06:35:48.310374] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.104 06:35:48 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.104 [ 00:29:57.104 { 00:29:57.104 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:57.104 "subtype": "Discovery", 00:29:57.104 "listen_addresses": [ 00:29:57.104 { 00:29:57.104 "trtype": "TCP", 00:29:57.104 "adrfam": "IPv4", 00:29:57.104 "traddr": "10.0.0.2", 00:29:57.104 "trsvcid": "4420" 00:29:57.104 } 00:29:57.104 ], 00:29:57.104 "allow_any_host": true, 00:29:57.104 "hosts": [] 00:29:57.104 }, 00:29:57.104 { 00:29:57.104 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:57.104 "subtype": "NVMe", 00:29:57.104 "listen_addresses": [ 00:29:57.104 { 00:29:57.104 "trtype": "TCP", 00:29:57.104 "adrfam": "IPv4", 00:29:57.104 "traddr": "10.0.0.2", 00:29:57.104 "trsvcid": "4420" 00:29:57.104 } 00:29:57.104 ], 00:29:57.104 "allow_any_host": true, 00:29:57.104 "hosts": [], 00:29:57.104 "serial_number": "SPDK00000000000001", 00:29:57.104 "model_number": "SPDK bdev Controller", 00:29:57.104 "max_namespaces": 32, 00:29:57.104 "min_cntlid": 1, 00:29:57.104 "max_cntlid": 65519, 00:29:57.104 "namespaces": [ 00:29:57.104 { 00:29:57.104 "nsid": 1, 00:29:57.104 "bdev_name": "Malloc0", 00:29:57.104 "name": "Malloc0", 00:29:57.104 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:29:57.104 "eui64": "ABCDEF0123456789", 00:29:57.104 "uuid": "c4548118-1c6f-47ad-a160-df4bc7283c10" 00:29:57.104 } 00:29:57.104 ] 00:29:57.104 } 00:29:57.104 ] 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.104 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:29:57.104 [2024-12-13 06:35:48.367289] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:29:57.104 [2024-12-13 06:35:48.367335] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116634 ] 00:29:57.104 [2024-12-13 06:35:48.410804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:29:57.104 [2024-12-13 06:35:48.410849] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:57.104 [2024-12-13 06:35:48.410854] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:57.104 [2024-12-13 06:35:48.410865] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:57.104 [2024-12-13 06:35:48.410874] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:57.104 [2024-12-13 06:35:48.411362] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:29:57.104 [2024-12-13 06:35:48.411396] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x2232ed0 0 00:29:57.104 [2024-12-13 06:35:48.417462] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:57.104 [2024-12-13 06:35:48.417474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:57.104 [2024-12-13 06:35:48.417478] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:57.104 [2024-12-13 06:35:48.417482] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:57.104 [2024-12-13 06:35:48.417514] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.417520] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.417523] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.104 [2024-12-13 06:35:48.417535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:57.104 [2024-12-13 06:35:48.417552] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.104 [2024-12-13 06:35:48.424457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.104 [2024-12-13 06:35:48.424466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.104 [2024-12-13 06:35:48.424469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.104 [2024-12-13 06:35:48.424482] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:57.104 [2024-12-13 06:35:48.424488] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:29:57.104 [2024-12-13 06:35:48.424493] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:29:57.104 [2024-12-13 06:35:48.424503] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424507] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 
00:29:57.104 [2024-12-13 06:35:48.424517] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.104 [2024-12-13 06:35:48.424531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.104 [2024-12-13 06:35:48.424703] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.104 [2024-12-13 06:35:48.424709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.104 [2024-12-13 06:35:48.424712] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.104 [2024-12-13 06:35:48.424720] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:29:57.104 [2024-12-13 06:35:48.424726] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:29:57.104 [2024-12-13 06:35:48.424732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.104 [2024-12-13 06:35:48.424744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.104 [2024-12-13 06:35:48.424755] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.104 [2024-12-13 06:35:48.424850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.104 [2024-12-13 06:35:48.424856] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:29:57.104 [2024-12-13 06:35:48.424859] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424862] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.104 [2024-12-13 06:35:48.424866] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:29:57.104 [2024-12-13 06:35:48.424873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:57.104 [2024-12-13 06:35:48.424879] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424885] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424888] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.104 [2024-12-13 06:35:48.424893] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.104 [2024-12-13 06:35:48.424903] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.104 [2024-12-13 06:35:48.424964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.104 [2024-12-13 06:35:48.424970] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.104 [2024-12-13 06:35:48.424973] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.104 [2024-12-13 06:35:48.424980] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:57.104 [2024-12-13 06:35:48.424988] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424992] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.424995] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.104 [2024-12-13 06:35:48.425000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.104 [2024-12-13 06:35:48.425010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.104 [2024-12-13 06:35:48.425100] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.104 [2024-12-13 06:35:48.425105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.104 [2024-12-13 06:35:48.425109] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.425112] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.104 [2024-12-13 06:35:48.425116] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:57.104 [2024-12-13 06:35:48.425120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:57.104 [2024-12-13 06:35:48.425126] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:57.104 [2024-12-13 06:35:48.425234] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:29:57.104 [2024-12-13 06:35:48.425238] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:29:57.104 [2024-12-13 06:35:48.425246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.425249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.104 [2024-12-13 06:35:48.425252] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.104 [2024-12-13 06:35:48.425258] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.105 [2024-12-13 06:35:48.425267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.105 [2024-12-13 06:35:48.425332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.425338] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.425341] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425344] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.425348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:57.105 [2024-12-13 06:35:48.425358] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425361] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425365] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425370] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.105 [2024-12-13 06:35:48.425379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.105 [2024-12-13 
06:35:48.425486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.425492] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.425495] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425498] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.425502] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:57.105 [2024-12-13 06:35:48.425506] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425513] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:29:57.105 [2024-12-13 06:35:48.425523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425534] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.105 [2024-12-13 06:35:48.425550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.105 [2024-12-13 06:35:48.425631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.105 [2024-12-13 06:35:48.425636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:29:57.105 [2024-12-13 06:35:48.425640] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425643] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2232ed0): datao=0, datal=4096, cccid=0 00:29:57.105 [2024-12-13 06:35:48.425647] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229e540) on tqpair(0x2232ed0): expected_datao=0, payload_size=4096 00:29:57.105 [2024-12-13 06:35:48.425652] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425669] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425673] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.425741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.425744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.425753] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:29:57.105 [2024-12-13 06:35:48.425758] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:29:57.105 [2024-12-13 06:35:48.425761] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:29:57.105 [2024-12-13 06:35:48.425768] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:29:57.105 [2024-12-13 06:35:48.425772] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:29:57.105 [2024-12-13 06:35:48.425776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425788] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425796] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425799] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425802] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.105 [2024-12-13 06:35:48.425818] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.105 [2024-12-13 06:35:48.425880] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.425886] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.425889] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425892] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.425899] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425902] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425905] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425910] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.105 [2024-12-13 06:35:48.425915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425918] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425921] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425926] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.105 [2024-12-13 06:35:48.425931] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425934] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425937] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425942] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.105 [2024-12-13 06:35:48.425947] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425953] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.105 [2024-12-13 06:35:48.425962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:57.105 [2024-12-13 06:35:48.425978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.425981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.425988] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.105 [2024-12-13 06:35:48.425999] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e540, cid 0, qid 0 00:29:57.105 [2024-12-13 06:35:48.426004] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e6c0, cid 1, qid 0 00:29:57.105 [2024-12-13 06:35:48.426008] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e840, cid 2, qid 0 00:29:57.105 [2024-12-13 06:35:48.426012] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.105 [2024-12-13 06:35:48.426016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229eb40, cid 4, qid 0 00:29:57.105 [2024-12-13 06:35:48.426131] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.426137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.426140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229eb40) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.426147] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:29:57.105 [2024-12-13 06:35:48.426151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:29:57.105 [2024-12-13 06:35:48.426160] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426164] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2232ed0) 00:29:57.105 [2024-12-13 06:35:48.426169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.105 [2024-12-13 06:35:48.426178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229eb40, cid 4, qid 0 00:29:57.105 [2024-12-13 06:35:48.426251] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.105 [2024-12-13 06:35:48.426256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.105 [2024-12-13 06:35:48.426259] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426262] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2232ed0): datao=0, datal=4096, cccid=4 00:29:57.105 [2024-12-13 06:35:48.426266] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229eb40) on tqpair(0x2232ed0): expected_datao=0, payload_size=4096 00:29:57.105 [2024-12-13 06:35:48.426270] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426276] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426279] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426332] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.105 [2024-12-13 06:35:48.426337] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.105 [2024-12-13 06:35:48.426340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.105 [2024-12-13 06:35:48.426343] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x229eb40) on tqpair=0x2232ed0 00:29:57.105 [2024-12-13 06:35:48.426353] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:29:57.105 [2024-12-13 06:35:48.426375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2232ed0) 00:29:57.106 [2024-12-13 06:35:48.426385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.106 [2024-12-13 06:35:48.426391] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426394] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426399] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x2232ed0) 00:29:57.106 [2024-12-13 06:35:48.426404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.106 [2024-12-13 06:35:48.426416] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229eb40, cid 4, qid 0 00:29:57.106 [2024-12-13 06:35:48.426421] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229ecc0, cid 5, qid 0 00:29:57.106 [2024-12-13 06:35:48.426539] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.106 [2024-12-13 06:35:48.426545] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.106 [2024-12-13 06:35:48.426548] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426551] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2232ed0): datao=0, datal=1024, cccid=4 00:29:57.106 [2024-12-13 06:35:48.426555] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229eb40) on tqpair(0x2232ed0): expected_datao=0, payload_size=1024 00:29:57.106 [2024-12-13 06:35:48.426559] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426565] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426568] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.106 [2024-12-13 06:35:48.426578] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.106 [2024-12-13 06:35:48.426581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.426584] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229ecc0) on tqpair=0x2232ed0 00:29:57.106 [2024-12-13 06:35:48.467625] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.106 [2024-12-13 06:35:48.467637] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.106 [2024-12-13 06:35:48.467641] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229eb40) on tqpair=0x2232ed0 00:29:57.106 [2024-12-13 06:35:48.467658] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2232ed0) 00:29:57.106 [2024-12-13 06:35:48.467669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.106 [2024-12-13 06:35:48.467684] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229eb40, cid 4, qid 0 00:29:57.106 [2024-12-13 06:35:48.467759] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.106 [2024-12-13 06:35:48.467765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.106 [2024-12-13 06:35:48.467768] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467771] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2232ed0): datao=0, datal=3072, cccid=4 00:29:57.106 [2024-12-13 06:35:48.467775] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229eb40) on tqpair(0x2232ed0): expected_datao=0, payload_size=3072 00:29:57.106 [2024-12-13 06:35:48.467779] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467806] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467810] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.106 [2024-12-13 06:35:48.467879] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.106 [2024-12-13 06:35:48.467882] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229eb40) on tqpair=0x2232ed0 00:29:57.106 [2024-12-13 06:35:48.467893] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.467899] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x2232ed0) 00:29:57.106 [2024-12-13 06:35:48.467905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.106 [2024-12-13 06:35:48.467918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229eb40, cid 4, qid 0 00:29:57.106 [2024-12-13 
06:35:48.467988] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.106 [2024-12-13 06:35:48.467994] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.106 [2024-12-13 06:35:48.467997] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.468000] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x2232ed0): datao=0, datal=8, cccid=4 00:29:57.106 [2024-12-13 06:35:48.468004] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x229eb40) on tqpair(0x2232ed0): expected_datao=0, payload_size=8 00:29:57.106 [2024-12-13 06:35:48.468008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.468013] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.468017] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.509593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.106 [2024-12-13 06:35:48.509603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.106 [2024-12-13 06:35:48.509607] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.106 [2024-12-13 06:35:48.509610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229eb40) on tqpair=0x2232ed0 00:29:57.106 ===================================================== 00:29:57.106 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:29:57.106 ===================================================== 00:29:57.106 Controller Capabilities/Features 00:29:57.106 ================================ 00:29:57.106 Vendor ID: 0000 00:29:57.106 Subsystem Vendor ID: 0000 00:29:57.106 Serial Number: .................... 00:29:57.106 Model Number: ........................................ 
00:29:57.106 Firmware Version: 25.01 00:29:57.106 Recommended Arb Burst: 0 00:29:57.106 IEEE OUI Identifier: 00 00 00 00:29:57.106 Multi-path I/O 00:29:57.106 May have multiple subsystem ports: No 00:29:57.106 May have multiple controllers: No 00:29:57.106 Associated with SR-IOV VF: No 00:29:57.106 Max Data Transfer Size: 131072 00:29:57.106 Max Number of Namespaces: 0 00:29:57.106 Max Number of I/O Queues: 1024 00:29:57.106 NVMe Specification Version (VS): 1.3 00:29:57.106 NVMe Specification Version (Identify): 1.3 00:29:57.106 Maximum Queue Entries: 128 00:29:57.106 Contiguous Queues Required: Yes 00:29:57.106 Arbitration Mechanisms Supported 00:29:57.106 Weighted Round Robin: Not Supported 00:29:57.106 Vendor Specific: Not Supported 00:29:57.106 Reset Timeout: 15000 ms 00:29:57.106 Doorbell Stride: 4 bytes 00:29:57.106 NVM Subsystem Reset: Not Supported 00:29:57.106 Command Sets Supported 00:29:57.106 NVM Command Set: Supported 00:29:57.106 Boot Partition: Not Supported 00:29:57.106 Memory Page Size Minimum: 4096 bytes 00:29:57.106 Memory Page Size Maximum: 4096 bytes 00:29:57.106 Persistent Memory Region: Not Supported 00:29:57.106 Optional Asynchronous Events Supported 00:29:57.106 Namespace Attribute Notices: Not Supported 00:29:57.106 Firmware Activation Notices: Not Supported 00:29:57.106 ANA Change Notices: Not Supported 00:29:57.106 PLE Aggregate Log Change Notices: Not Supported 00:29:57.106 LBA Status Info Alert Notices: Not Supported 00:29:57.106 EGE Aggregate Log Change Notices: Not Supported 00:29:57.106 Normal NVM Subsystem Shutdown event: Not Supported 00:29:57.106 Zone Descriptor Change Notices: Not Supported 00:29:57.106 Discovery Log Change Notices: Supported 00:29:57.106 Controller Attributes 00:29:57.106 128-bit Host Identifier: Not Supported 00:29:57.106 Non-Operational Permissive Mode: Not Supported 00:29:57.106 NVM Sets: Not Supported 00:29:57.106 Read Recovery Levels: Not Supported 00:29:57.106 Endurance Groups: Not Supported 00:29:57.106 
Predictable Latency Mode: Not Supported 00:29:57.106 Traffic Based Keep ALive: Not Supported 00:29:57.106 Namespace Granularity: Not Supported 00:29:57.106 SQ Associations: Not Supported 00:29:57.106 UUID List: Not Supported 00:29:57.106 Multi-Domain Subsystem: Not Supported 00:29:57.106 Fixed Capacity Management: Not Supported 00:29:57.106 Variable Capacity Management: Not Supported 00:29:57.106 Delete Endurance Group: Not Supported 00:29:57.106 Delete NVM Set: Not Supported 00:29:57.106 Extended LBA Formats Supported: Not Supported 00:29:57.106 Flexible Data Placement Supported: Not Supported 00:29:57.106 00:29:57.106 Controller Memory Buffer Support 00:29:57.106 ================================ 00:29:57.106 Supported: No 00:29:57.106 00:29:57.106 Persistent Memory Region Support 00:29:57.106 ================================ 00:29:57.106 Supported: No 00:29:57.106 00:29:57.106 Admin Command Set Attributes 00:29:57.106 ============================ 00:29:57.106 Security Send/Receive: Not Supported 00:29:57.106 Format NVM: Not Supported 00:29:57.106 Firmware Activate/Download: Not Supported 00:29:57.106 Namespace Management: Not Supported 00:29:57.106 Device Self-Test: Not Supported 00:29:57.106 Directives: Not Supported 00:29:57.106 NVMe-MI: Not Supported 00:29:57.106 Virtualization Management: Not Supported 00:29:57.106 Doorbell Buffer Config: Not Supported 00:29:57.106 Get LBA Status Capability: Not Supported 00:29:57.107 Command & Feature Lockdown Capability: Not Supported 00:29:57.107 Abort Command Limit: 1 00:29:57.107 Async Event Request Limit: 4 00:29:57.107 Number of Firmware Slots: N/A 00:29:57.107 Firmware Slot 1 Read-Only: N/A 00:29:57.107 Firmware Activation Without Reset: N/A 00:29:57.107 Multiple Update Detection Support: N/A 00:29:57.107 Firmware Update Granularity: No Information Provided 00:29:57.107 Per-Namespace SMART Log: No 00:29:57.107 Asymmetric Namespace Access Log Page: Not Supported 00:29:57.107 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:29:57.107 Command Effects Log Page: Not Supported 00:29:57.107 Get Log Page Extended Data: Supported 00:29:57.107 Telemetry Log Pages: Not Supported 00:29:57.107 Persistent Event Log Pages: Not Supported 00:29:57.107 Supported Log Pages Log Page: May Support 00:29:57.107 Commands Supported & Effects Log Page: Not Supported 00:29:57.107 Feature Identifiers & Effects Log Page:May Support 00:29:57.107 NVMe-MI Commands & Effects Log Page: May Support 00:29:57.107 Data Area 4 for Telemetry Log: Not Supported 00:29:57.107 Error Log Page Entries Supported: 128 00:29:57.107 Keep Alive: Not Supported 00:29:57.107 00:29:57.107 NVM Command Set Attributes 00:29:57.107 ========================== 00:29:57.107 Submission Queue Entry Size 00:29:57.107 Max: 1 00:29:57.107 Min: 1 00:29:57.107 Completion Queue Entry Size 00:29:57.107 Max: 1 00:29:57.107 Min: 1 00:29:57.107 Number of Namespaces: 0 00:29:57.107 Compare Command: Not Supported 00:29:57.107 Write Uncorrectable Command: Not Supported 00:29:57.107 Dataset Management Command: Not Supported 00:29:57.107 Write Zeroes Command: Not Supported 00:29:57.107 Set Features Save Field: Not Supported 00:29:57.107 Reservations: Not Supported 00:29:57.107 Timestamp: Not Supported 00:29:57.107 Copy: Not Supported 00:29:57.107 Volatile Write Cache: Not Present 00:29:57.107 Atomic Write Unit (Normal): 1 00:29:57.107 Atomic Write Unit (PFail): 1 00:29:57.107 Atomic Compare & Write Unit: 1 00:29:57.107 Fused Compare & Write: Supported 00:29:57.107 Scatter-Gather List 00:29:57.107 SGL Command Set: Supported 00:29:57.107 SGL Keyed: Supported 00:29:57.107 SGL Bit Bucket Descriptor: Not Supported 00:29:57.107 SGL Metadata Pointer: Not Supported 00:29:57.107 Oversized SGL: Not Supported 00:29:57.107 SGL Metadata Address: Not Supported 00:29:57.107 SGL Offset: Supported 00:29:57.107 Transport SGL Data Block: Not Supported 00:29:57.107 Replay Protected Memory Block: Not Supported 00:29:57.107 00:29:57.107 
Firmware Slot Information 00:29:57.107 ========================= 00:29:57.107 Active slot: 0 00:29:57.107 00:29:57.107 00:29:57.107 Error Log 00:29:57.107 ========= 00:29:57.107 00:29:57.107 Active Namespaces 00:29:57.107 ================= 00:29:57.107 Discovery Log Page 00:29:57.107 ================== 00:29:57.107 Generation Counter: 2 00:29:57.107 Number of Records: 2 00:29:57.107 Record Format: 0 00:29:57.107 00:29:57.107 Discovery Log Entry 0 00:29:57.107 ---------------------- 00:29:57.107 Transport Type: 3 (TCP) 00:29:57.107 Address Family: 1 (IPv4) 00:29:57.107 Subsystem Type: 3 (Current Discovery Subsystem) 00:29:57.107 Entry Flags: 00:29:57.107 Duplicate Returned Information: 1 00:29:57.107 Explicit Persistent Connection Support for Discovery: 1 00:29:57.107 Transport Requirements: 00:29:57.107 Secure Channel: Not Required 00:29:57.107 Port ID: 0 (0x0000) 00:29:57.107 Controller ID: 65535 (0xffff) 00:29:57.107 Admin Max SQ Size: 128 00:29:57.107 Transport Service Identifier: 4420 00:29:57.107 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:29:57.107 Transport Address: 10.0.0.2 00:29:57.107 Discovery Log Entry 1 00:29:57.107 ---------------------- 00:29:57.107 Transport Type: 3 (TCP) 00:29:57.107 Address Family: 1 (IPv4) 00:29:57.107 Subsystem Type: 2 (NVM Subsystem) 00:29:57.107 Entry Flags: 00:29:57.107 Duplicate Returned Information: 0 00:29:57.107 Explicit Persistent Connection Support for Discovery: 0 00:29:57.107 Transport Requirements: 00:29:57.107 Secure Channel: Not Required 00:29:57.107 Port ID: 0 (0x0000) 00:29:57.107 Controller ID: 65535 (0xffff) 00:29:57.107 Admin Max SQ Size: 128 00:29:57.107 Transport Service Identifier: 4420 00:29:57.107 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:29:57.107 Transport Address: 10.0.0.2 [2024-12-13 06:35:48.509687] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:29:57.107 [2024-12-13 
06:35:48.509698] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e540) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.107 [2024-12-13 06:35:48.509709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e6c0) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.107 [2024-12-13 06:35:48.509718] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e840) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.107 [2024-12-13 06:35:48.509726] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.107 [2024-12-13 06:35:48.509738] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509742] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509745] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.107 [2024-12-13 06:35:48.509752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.107 [2024-12-13 06:35:48.509765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.107 [2024-12-13 06:35:48.509832] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.107 [2024-12-13 
06:35:48.509838] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.107 [2024-12-13 06:35:48.509841] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509845] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509851] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509856] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509859] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.107 [2024-12-13 06:35:48.509865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.107 [2024-12-13 06:35:48.509878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.107 [2024-12-13 06:35:48.509981] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.107 [2024-12-13 06:35:48.509987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.107 [2024-12-13 06:35:48.509990] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.509993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.509998] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:29:57.107 [2024-12-13 06:35:48.510002] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:29:57.107 [2024-12-13 06:35:48.510010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.510014] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.107 
[2024-12-13 06:35:48.510017] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.107 [2024-12-13 06:35:48.510023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.107 [2024-12-13 06:35:48.510032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.107 [2024-12-13 06:35:48.510092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.107 [2024-12-13 06:35:48.510098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.107 [2024-12-13 06:35:48.510101] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.510105] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.107 [2024-12-13 06:35:48.510113] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.510117] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.107 [2024-12-13 06:35:48.510120] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.108 [2024-12-13 06:35:48.510126] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.510135] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.108 [2024-12-13 06:35:48.510234] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.510240] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.510243] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510247] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on 
tqpair=0x2232ed0 00:29:57.108 [2024-12-13 06:35:48.510255] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510258] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510261] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.108 [2024-12-13 06:35:48.510267] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.510276] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.108 [2024-12-13 06:35:48.510385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.510391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.510394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510399] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.108 [2024-12-13 06:35:48.510408] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.510415] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.108 [2024-12-13 06:35:48.510420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.510430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.108 [2024-12-13 06:35:48.514460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.514468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:29:57.108 [2024-12-13 06:35:48.514471] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.514474] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.108 [2024-12-13 06:35:48.514482] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.514486] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.514489] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x2232ed0) 00:29:57.108 [2024-12-13 06:35:48.514495] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.514506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x229e9c0, cid 3, qid 0 00:29:57.108 [2024-12-13 06:35:48.514653] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.514659] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.514662] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.514665] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x229e9c0) on tqpair=0x2232ed0 00:29:57.108 [2024-12-13 06:35:48.514672] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 4 milliseconds 00:29:57.108 00:29:57.108 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:29:57.108 [2024-12-13 06:35:48.550835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:29:57.108 [2024-12-13 06:35:48.550868] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1116638 ] 00:29:57.108 [2024-12-13 06:35:48.588151] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:29:57.108 [2024-12-13 06:35:48.588191] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:29:57.108 [2024-12-13 06:35:48.588196] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:29:57.108 [2024-12-13 06:35:48.588206] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:29:57.108 [2024-12-13 06:35:48.588213] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:29:57.108 [2024-12-13 06:35:48.595591] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:29:57.108 [2024-12-13 06:35:48.595621] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17cced0 0 00:29:57.108 [2024-12-13 06:35:48.595785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:29:57.108 [2024-12-13 06:35:48.595792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:29:57.108 [2024-12-13 06:35:48.595795] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:29:57.108 [2024-12-13 06:35:48.595798] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:29:57.108 [2024-12-13 06:35:48.595819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.595824] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.595827] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.108 [2024-12-13 06:35:48.595837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:57.108 [2024-12-13 06:35:48.595849] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.108 [2024-12-13 06:35:48.603457] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.603465] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.603468] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.108 [2024-12-13 06:35:48.603480] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:29:57.108 [2024-12-13 06:35:48.603486] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:29:57.108 [2024-12-13 06:35:48.603490] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:29:57.108 [2024-12-13 06:35:48.603499] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603503] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.108 [2024-12-13 06:35:48.603513] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.603525] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.108 [2024-12-13 06:35:48.603684] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.603690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.603693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.108 [2024-12-13 06:35:48.603701] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:29:57.108 [2024-12-13 06:35:48.603707] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:29:57.108 [2024-12-13 06:35:48.603713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603717] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603720] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.108 [2024-12-13 06:35:48.603725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.603735] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.108 [2024-12-13 06:35:48.603799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.603805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.603808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.108 [2024-12-13 06:35:48.603818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:29:57.108 [2024-12-13 06:35:48.603825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:29:57.108 [2024-12-13 06:35:48.603830] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603837] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.108 [2024-12-13 06:35:48.603842] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.603852] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.108 [2024-12-13 06:35:48.603914] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.603920] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.603923] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603926] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.108 [2024-12-13 06:35:48.603930] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:29:57.108 [2024-12-13 06:35:48.603938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.603945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.108 [2024-12-13 06:35:48.603950] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.108 [2024-12-13 06:35:48.603960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.108 [2024-12-13 06:35:48.604032] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.108 [2024-12-13 06:35:48.604038] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.108 [2024-12-13 06:35:48.604041] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.108 [2024-12-13 06:35:48.604044] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.108 [2024-12-13 06:35:48.604048] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:29:57.109 [2024-12-13 06:35:48.604052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:29:57.109 [2024-12-13 06:35:48.604059] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:29:57.109 [2024-12-13 06:35:48.604166] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:29:57.109 [2024-12-13 06:35:48.604170] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:29:57.109 [2024-12-13 06:35:48.604177] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604180] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604183] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.604188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.109 [2024-12-13 06:35:48.604198] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.109 [2024-12-13 06:35:48.604261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.604267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.604271] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604275] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.109 [2024-12-13 06:35:48.604279] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:29:57.109 [2024-12-13 06:35:48.604288] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604291] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604294] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.604300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.109 [2024-12-13 06:35:48.604309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.109 [2024-12-13 06:35:48.604379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.604385] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.604388] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.109 [2024-12-13 06:35:48.604395] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:29:57.109 [2024-12-13 06:35:48.604399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.604405] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:29:57.109 [2024-12-13 06:35:48.604412] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.604419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.604428] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.109 [2024-12-13 06:35:48.604438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.109 [2024-12-13 06:35:48.604535] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.109 [2024-12-13 06:35:48.604542] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.109 [2024-12-13 06:35:48.604545] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604548] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=4096, cccid=0 00:29:57.109 [2024-12-13 06:35:48.604552] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838540) on tqpair(0x17cced0): expected_datao=0, payload_size=4096 00:29:57.109 [2024-12-13 06:35:48.604556] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604567] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.604570] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646640] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.646652] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.646655] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646659] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.109 [2024-12-13 06:35:48.646666] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:29:57.109 [2024-12-13 06:35:48.646671] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:29:57.109 [2024-12-13 06:35:48.646678] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:29:57.109 [2024-12-13 06:35:48.646681] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:29:57.109 [2024-12-13 06:35:48.646685] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:29:57.109 [2024-12-13 06:35:48.646690] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.646703] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.646711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646715] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646719] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646726] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.109 [2024-12-13 06:35:48.646738] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838540, cid 0, qid 0 00:29:57.109 [2024-12-13 06:35:48.646800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.646806] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.646809] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646813] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.109 [2024-12-13 06:35:48.646818] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646825] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.109 [2024-12-13 06:35:48.646835] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646846] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:29:57.109 [2024-12-13 06:35:48.646852] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646855] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646858] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646863] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.109 [2024-12-13 06:35:48.646868] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646874] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.109 [2024-12-13 06:35:48.646883] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.646892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.646898] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.646903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.646908] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.109 [2024-12-13 06:35:48.646920] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1838540, cid 0, qid 0 00:29:57.109 [2024-12-13 06:35:48.646924] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18386c0, cid 1, qid 0 00:29:57.109 [2024-12-13 06:35:48.646928] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838840, cid 2, qid 0 00:29:57.109 [2024-12-13 06:35:48.646932] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.109 [2024-12-13 06:35:48.646936] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.109 [2024-12-13 06:35:48.647053] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.647059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.647062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.647065] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.109 [2024-12-13 06:35:48.647069] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:29:57.109 [2024-12-13 06:35:48.647073] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.647082] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.647088] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:29:57.109 [2024-12-13 06:35:48.647093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.109 [2024-12-13 06:35:48.647097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.109 [2024-12-13 
06:35:48.647100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.109 [2024-12-13 06:35:48.647105] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:29:57.109 [2024-12-13 06:35:48.647115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.109 [2024-12-13 06:35:48.647203] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.109 [2024-12-13 06:35:48.647208] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.109 [2024-12-13 06:35:48.647211] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.647214] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.110 [2024-12-13 06:35:48.647264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:29:57.110 [2024-12-13 06:35:48.647273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:29:57.110 [2024-12-13 06:35:48.647279] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.647282] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.110 [2024-12-13 06:35:48.647287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.110 [2024-12-13 06:35:48.647297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.110 [2024-12-13 06:35:48.647371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.110 [2024-12-13 06:35:48.647377] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.110 [2024-12-13 06:35:48.647382] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.647385] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=4096, cccid=4 00:29:57.110 [2024-12-13 06:35:48.647389] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838b40) on tqpair(0x17cced0): expected_datao=0, payload_size=4096 00:29:57.110 [2024-12-13 06:35:48.647393] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.647419] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.647423] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691454] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.110 [2024-12-13 06:35:48.691464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.110 [2024-12-13 06:35:48.691467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.110 [2024-12-13 06:35:48.691481] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:29:57.110 [2024-12-13 06:35:48.691492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:29:57.110 [2024-12-13 06:35:48.691502] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:29:57.110 [2024-12-13 06:35:48.691508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691511] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x17cced0) 00:29:57.110 [2024-12-13 06:35:48.691517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.110 [2024-12-13 06:35:48.691529] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.110 [2024-12-13 06:35:48.691702] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.110 [2024-12-13 06:35:48.691708] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.110 [2024-12-13 06:35:48.691711] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691714] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=4096, cccid=4 00:29:57.110 [2024-12-13 06:35:48.691718] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838b40) on tqpair(0x17cced0): expected_datao=0, payload_size=4096 00:29:57.110 [2024-12-13 06:35:48.691721] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691732] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.691735] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737455] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.110 [2024-12-13 06:35:48.737464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.110 [2024-12-13 06:35:48.737467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.110 [2024-12-13 06:35:48.737483] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:29:57.110 
[2024-12-13 06:35:48.737492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:29:57.110 [2024-12-13 06:35:48.737500] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737503] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.110 [2024-12-13 06:35:48.737509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.110 [2024-12-13 06:35:48.737523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.110 [2024-12-13 06:35:48.737681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.110 [2024-12-13 06:35:48.737687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.110 [2024-12-13 06:35:48.737690] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737693] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=4096, cccid=4 00:29:57.110 [2024-12-13 06:35:48.737697] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838b40) on tqpair(0x17cced0): expected_datao=0, payload_size=4096 00:29:57.110 [2024-12-13 06:35:48.737701] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737717] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.110 [2024-12-13 06:35:48.737720] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779583] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.779598] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.779602] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 06:35:48.779615] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779623] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779631] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779637] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779641] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779646] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779650] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:29:57.371 [2024-12-13 06:35:48.779655] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:29:57.371 [2024-12-13 06:35:48.779659] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:29:57.371 [2024-12-13 06:35:48.779673] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779677] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.779684] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.779690] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779693] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779697] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.779702] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:29:57.371 [2024-12-13 06:35:48.779715] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.371 [2024-12-13 06:35:48.779721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838cc0, cid 5, qid 0 00:29:57.371 [2024-12-13 06:35:48.779838] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.779846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.779850] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779853] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 06:35:48.779859] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.779863] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.779867] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779870] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838cc0) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 
06:35:48.779878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779881] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.779886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.779896] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838cc0, cid 5, qid 0 00:29:57.371 [2024-12-13 06:35:48.779959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.779965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.779968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838cc0) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 06:35:48.779979] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.779982] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.779987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.779996] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838cc0, cid 5, qid 0 00:29:57.371 [2024-12-13 06:35:48.780087] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.780092] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.780095] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780099] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1838cc0) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 06:35:48.780107] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780110] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.780115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.780124] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838cc0, cid 5, qid 0 00:29:57.371 [2024-12-13 06:35:48.780189] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.371 [2024-12-13 06:35:48.780195] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.371 [2024-12-13 06:35:48.780198] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780202] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838cc0) on tqpair=0x17cced0 00:29:57.371 [2024-12-13 06:35:48.780214] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780218] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.780223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.780229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780236] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.780242] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:57.371 [2024-12-13 06:35:48.780248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.780256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.780262] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17cced0) 00:29:57.371 [2024-12-13 06:35:48.780271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.371 [2024-12-13 06:35:48.780281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838cc0, cid 5, qid 0 00:29:57.371 [2024-12-13 06:35:48.780286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838b40, cid 4, qid 0 00:29:57.371 [2024-12-13 06:35:48.780290] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838e40, cid 6, qid 0 00:29:57.371 [2024-12-13 06:35:48.780294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838fc0, cid 7, qid 0 00:29:57.371 [2024-12-13 06:35:48.780426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.371 [2024-12-13 06:35:48.780432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.371 [2024-12-13 06:35:48.780435] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780439] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=8192, cccid=5 00:29:57.371 [2024-12-13 06:35:48.780443] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838cc0) on tqpair(0x17cced0): expected_datao=0, payload_size=8192 00:29:57.371 [2024-12-13 06:35:48.780446] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780505] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780509] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.371 [2024-12-13 06:35:48.780514] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.371 [2024-12-13 06:35:48.780519] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.372 [2024-12-13 06:35:48.780522] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780525] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=512, cccid=4 00:29:57.372 [2024-12-13 06:35:48.780529] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838b40) on tqpair(0x17cced0): expected_datao=0, payload_size=512 00:29:57.372 [2024-12-13 06:35:48.780533] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780538] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780541] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780546] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.372 [2024-12-13 06:35:48.780551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.372 [2024-12-13 06:35:48.780554] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780557] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=512, cccid=6 00:29:57.372 [2024-12-13 06:35:48.780561] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1838e40) on tqpair(0x17cced0): expected_datao=0, payload_size=512 00:29:57.372 [2024-12-13 06:35:48.780565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780570] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780574] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:29:57.372 [2024-12-13 06:35:48.780584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:29:57.372 [2024-12-13 06:35:48.780587] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780590] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17cced0): datao=0, datal=4096, cccid=7 00:29:57.372 [2024-12-13 06:35:48.780594] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1838fc0) on tqpair(0x17cced0): expected_datao=0, payload_size=4096 00:29:57.372 [2024-12-13 06:35:48.780598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780603] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780607] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.372 [2024-12-13 06:35:48.780619] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.372 [2024-12-13 06:35:48.780622] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780625] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838cc0) on tqpair=0x17cced0 00:29:57.372 [2024-12-13 06:35:48.780635] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.372 [2024-12-13 06:35:48.780640] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.372 [2024-12-13 06:35:48.780643] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780647] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838b40) on tqpair=0x17cced0 00:29:57.372 [2024-12-13 06:35:48.780655] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.372 [2024-12-13 06:35:48.780660] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.372 [2024-12-13 06:35:48.780663] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838e40) on tqpair=0x17cced0 00:29:57.372 [2024-12-13 06:35:48.780672] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.372 [2024-12-13 06:35:48.780677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.372 [2024-12-13 06:35:48.780680] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.372 [2024-12-13 06:35:48.780683] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838fc0) on tqpair=0x17cced0 00:29:57.372 ===================================================== 00:29:57.372 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:57.372 ===================================================== 00:29:57.372 Controller Capabilities/Features 00:29:57.372 ================================ 00:29:57.372 Vendor ID: 8086 00:29:57.372 Subsystem Vendor ID: 8086 00:29:57.372 Serial Number: SPDK00000000000001 00:29:57.372 Model Number: SPDK bdev Controller 00:29:57.372 Firmware Version: 25.01 00:29:57.372 Recommended Arb Burst: 6 00:29:57.372 IEEE OUI Identifier: e4 d2 5c 00:29:57.372 Multi-path I/O 00:29:57.372 May have multiple subsystem ports: Yes 00:29:57.372 May have multiple controllers: Yes 00:29:57.372 Associated with SR-IOV VF: No 
00:29:57.372 Max Data Transfer Size: 131072 00:29:57.372 Max Number of Namespaces: 32 00:29:57.372 Max Number of I/O Queues: 127 00:29:57.372 NVMe Specification Version (VS): 1.3 00:29:57.372 NVMe Specification Version (Identify): 1.3 00:29:57.372 Maximum Queue Entries: 128 00:29:57.372 Contiguous Queues Required: Yes 00:29:57.372 Arbitration Mechanisms Supported 00:29:57.372 Weighted Round Robin: Not Supported 00:29:57.372 Vendor Specific: Not Supported 00:29:57.372 Reset Timeout: 15000 ms 00:29:57.372 Doorbell Stride: 4 bytes 00:29:57.372 NVM Subsystem Reset: Not Supported 00:29:57.372 Command Sets Supported 00:29:57.372 NVM Command Set: Supported 00:29:57.372 Boot Partition: Not Supported 00:29:57.372 Memory Page Size Minimum: 4096 bytes 00:29:57.372 Memory Page Size Maximum: 4096 bytes 00:29:57.372 Persistent Memory Region: Not Supported 00:29:57.372 Optional Asynchronous Events Supported 00:29:57.372 Namespace Attribute Notices: Supported 00:29:57.372 Firmware Activation Notices: Not Supported 00:29:57.372 ANA Change Notices: Not Supported 00:29:57.372 PLE Aggregate Log Change Notices: Not Supported 00:29:57.372 LBA Status Info Alert Notices: Not Supported 00:29:57.372 EGE Aggregate Log Change Notices: Not Supported 00:29:57.372 Normal NVM Subsystem Shutdown event: Not Supported 00:29:57.372 Zone Descriptor Change Notices: Not Supported 00:29:57.372 Discovery Log Change Notices: Not Supported 00:29:57.372 Controller Attributes 00:29:57.372 128-bit Host Identifier: Supported 00:29:57.372 Non-Operational Permissive Mode: Not Supported 00:29:57.372 NVM Sets: Not Supported 00:29:57.372 Read Recovery Levels: Not Supported 00:29:57.372 Endurance Groups: Not Supported 00:29:57.372 Predictable Latency Mode: Not Supported 00:29:57.372 Traffic Based Keep ALive: Not Supported 00:29:57.372 Namespace Granularity: Not Supported 00:29:57.372 SQ Associations: Not Supported 00:29:57.372 UUID List: Not Supported 00:29:57.372 Multi-Domain Subsystem: Not Supported 00:29:57.372 
Fixed Capacity Management: Not Supported 00:29:57.372 Variable Capacity Management: Not Supported 00:29:57.372 Delete Endurance Group: Not Supported 00:29:57.372 Delete NVM Set: Not Supported 00:29:57.372 Extended LBA Formats Supported: Not Supported 00:29:57.372 Flexible Data Placement Supported: Not Supported 00:29:57.372 00:29:57.372 Controller Memory Buffer Support 00:29:57.372 ================================ 00:29:57.372 Supported: No 00:29:57.372 00:29:57.372 Persistent Memory Region Support 00:29:57.372 ================================ 00:29:57.372 Supported: No 00:29:57.372 00:29:57.372 Admin Command Set Attributes 00:29:57.372 ============================ 00:29:57.372 Security Send/Receive: Not Supported 00:29:57.372 Format NVM: Not Supported 00:29:57.372 Firmware Activate/Download: Not Supported 00:29:57.372 Namespace Management: Not Supported 00:29:57.372 Device Self-Test: Not Supported 00:29:57.372 Directives: Not Supported 00:29:57.372 NVMe-MI: Not Supported 00:29:57.372 Virtualization Management: Not Supported 00:29:57.372 Doorbell Buffer Config: Not Supported 00:29:57.372 Get LBA Status Capability: Not Supported 00:29:57.372 Command & Feature Lockdown Capability: Not Supported 00:29:57.372 Abort Command Limit: 4 00:29:57.372 Async Event Request Limit: 4 00:29:57.372 Number of Firmware Slots: N/A 00:29:57.372 Firmware Slot 1 Read-Only: N/A 00:29:57.372 Firmware Activation Without Reset: N/A 00:29:57.372 Multiple Update Detection Support: N/A 00:29:57.372 Firmware Update Granularity: No Information Provided 00:29:57.372 Per-Namespace SMART Log: No 00:29:57.372 Asymmetric Namespace Access Log Page: Not Supported 00:29:57.372 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:29:57.372 Command Effects Log Page: Supported 00:29:57.372 Get Log Page Extended Data: Supported 00:29:57.372 Telemetry Log Pages: Not Supported 00:29:57.372 Persistent Event Log Pages: Not Supported 00:29:57.372 Supported Log Pages Log Page: May Support 00:29:57.372 Commands Supported & 
Effects Log Page: Not Supported 00:29:57.372 Feature Identifiers & Effects Log Page:May Support 00:29:57.372 NVMe-MI Commands & Effects Log Page: May Support 00:29:57.372 Data Area 4 for Telemetry Log: Not Supported 00:29:57.372 Error Log Page Entries Supported: 128 00:29:57.372 Keep Alive: Supported 00:29:57.372 Keep Alive Granularity: 10000 ms 00:29:57.372 00:29:57.372 NVM Command Set Attributes 00:29:57.372 ========================== 00:29:57.372 Submission Queue Entry Size 00:29:57.372 Max: 64 00:29:57.372 Min: 64 00:29:57.372 Completion Queue Entry Size 00:29:57.372 Max: 16 00:29:57.372 Min: 16 00:29:57.372 Number of Namespaces: 32 00:29:57.372 Compare Command: Supported 00:29:57.372 Write Uncorrectable Command: Not Supported 00:29:57.372 Dataset Management Command: Supported 00:29:57.372 Write Zeroes Command: Supported 00:29:57.372 Set Features Save Field: Not Supported 00:29:57.372 Reservations: Supported 00:29:57.372 Timestamp: Not Supported 00:29:57.372 Copy: Supported 00:29:57.372 Volatile Write Cache: Present 00:29:57.372 Atomic Write Unit (Normal): 1 00:29:57.372 Atomic Write Unit (PFail): 1 00:29:57.372 Atomic Compare & Write Unit: 1 00:29:57.372 Fused Compare & Write: Supported 00:29:57.372 Scatter-Gather List 00:29:57.372 SGL Command Set: Supported 00:29:57.372 SGL Keyed: Supported 00:29:57.372 SGL Bit Bucket Descriptor: Not Supported 00:29:57.372 SGL Metadata Pointer: Not Supported 00:29:57.373 Oversized SGL: Not Supported 00:29:57.373 SGL Metadata Address: Not Supported 00:29:57.373 SGL Offset: Supported 00:29:57.373 Transport SGL Data Block: Not Supported 00:29:57.373 Replay Protected Memory Block: Not Supported 00:29:57.373 00:29:57.373 Firmware Slot Information 00:29:57.373 ========================= 00:29:57.373 Active slot: 1 00:29:57.373 Slot 1 Firmware Revision: 25.01 00:29:57.373 00:29:57.373 00:29:57.373 Commands Supported and Effects 00:29:57.373 ============================== 00:29:57.373 Admin Commands 00:29:57.373 -------------- 
00:29:57.373 Get Log Page (02h): Supported 00:29:57.373 Identify (06h): Supported 00:29:57.373 Abort (08h): Supported 00:29:57.373 Set Features (09h): Supported 00:29:57.373 Get Features (0Ah): Supported 00:29:57.373 Asynchronous Event Request (0Ch): Supported 00:29:57.373 Keep Alive (18h): Supported 00:29:57.373 I/O Commands 00:29:57.373 ------------ 00:29:57.373 Flush (00h): Supported LBA-Change 00:29:57.373 Write (01h): Supported LBA-Change 00:29:57.373 Read (02h): Supported 00:29:57.373 Compare (05h): Supported 00:29:57.373 Write Zeroes (08h): Supported LBA-Change 00:29:57.373 Dataset Management (09h): Supported LBA-Change 00:29:57.373 Copy (19h): Supported LBA-Change 00:29:57.373 00:29:57.373 Error Log 00:29:57.373 ========= 00:29:57.373 00:29:57.373 Arbitration 00:29:57.373 =========== 00:29:57.373 Arbitration Burst: 1 00:29:57.373 00:29:57.373 Power Management 00:29:57.373 ================ 00:29:57.373 Number of Power States: 1 00:29:57.373 Current Power State: Power State #0 00:29:57.373 Power State #0: 00:29:57.373 Max Power: 0.00 W 00:29:57.373 Non-Operational State: Operational 00:29:57.373 Entry Latency: Not Reported 00:29:57.373 Exit Latency: Not Reported 00:29:57.373 Relative Read Throughput: 0 00:29:57.373 Relative Read Latency: 0 00:29:57.373 Relative Write Throughput: 0 00:29:57.373 Relative Write Latency: 0 00:29:57.373 Idle Power: Not Reported 00:29:57.373 Active Power: Not Reported 00:29:57.373 Non-Operational Permissive Mode: Not Supported 00:29:57.373 00:29:57.373 Health Information 00:29:57.373 ================== 00:29:57.373 Critical Warnings: 00:29:57.373 Available Spare Space: OK 00:29:57.373 Temperature: OK 00:29:57.373 Device Reliability: OK 00:29:57.373 Read Only: No 00:29:57.373 Volatile Memory Backup: OK 00:29:57.373 Current Temperature: 0 Kelvin (-273 Celsius) 00:29:57.373 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:29:57.373 Available Spare: 0% 00:29:57.373 Available Spare Threshold: 0% 00:29:57.373 Life Percentage 
Used:[2024-12-13 06:35:48.780762] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.780767] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.780772] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.780784] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1838fc0, cid 7, qid 0 00:29:57.373 [2024-12-13 06:35:48.780901] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 06:35:48.780906] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.780909] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.780913] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838fc0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.780939] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:29:57.373 [2024-12-13 06:35:48.780947] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838540) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.780952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.373 [2024-12-13 06:35:48.780957] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18386c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.780964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.373 [2024-12-13 06:35:48.780968] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1838840) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.780972] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.373 [2024-12-13 06:35:48.780976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.780980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.373 [2024-12-13 06:35:48.780987] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.780990] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.780993] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.780999] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.781010] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.373 [2024-12-13 06:35:48.781103] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 06:35:48.781108] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.781112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781115] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.781120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781124] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781127] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.781132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.781144] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.373 [2024-12-13 06:35:48.781216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 06:35:48.781222] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.781225] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.781232] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:29:57.373 [2024-12-13 06:35:48.781236] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:29:57.373 [2024-12-13 06:35:48.781244] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.781256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.781265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.373 [2024-12-13 06:35:48.781353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 06:35:48.781358] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.781362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781365] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.781373] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781378] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.781382] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.781387] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.781396] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.373 [2024-12-13 06:35:48.785456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 06:35:48.785464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.785467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.785470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.785480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.785484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.785487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17cced0) 00:29:57.373 [2024-12-13 06:35:48.785492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.373 [2024-12-13 06:35:48.785503] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18389c0, cid 3, qid 0 00:29:57.373 [2024-12-13 06:35:48.785688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:29:57.373 [2024-12-13 
06:35:48.785694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:29:57.373 [2024-12-13 06:35:48.785697] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:29:57.373 [2024-12-13 06:35:48.785700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18389c0) on tqpair=0x17cced0 00:29:57.373 [2024-12-13 06:35:48.785707] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:29:57.373 0% 00:29:57.373 Data Units Read: 0 00:29:57.373 Data Units Written: 0 00:29:57.373 Host Read Commands: 0 00:29:57.373 Host Write Commands: 0 00:29:57.373 Controller Busy Time: 0 minutes 00:29:57.373 Power Cycles: 0 00:29:57.373 Power On Hours: 0 hours 00:29:57.373 Unsafe Shutdowns: 0 00:29:57.373 Unrecoverable Media Errors: 0 00:29:57.373 Lifetime Error Log Entries: 0 00:29:57.373 Warning Temperature Time: 0 minutes 00:29:57.373 Critical Temperature Time: 0 minutes 00:29:57.373 00:29:57.373 Number of Queues 00:29:57.373 ================ 00:29:57.373 Number of I/O Submission Queues: 127 00:29:57.373 Number of I/O Completion Queues: 127 00:29:57.373 00:29:57.373 Active Namespaces 00:29:57.374 ================= 00:29:57.374 Namespace ID:1 00:29:57.374 Error Recovery Timeout: Unlimited 00:29:57.374 Command Set Identifier: NVM (00h) 00:29:57.374 Deallocate: Supported 00:29:57.374 Deallocated/Unwritten Error: Not Supported 00:29:57.374 Deallocated Read Value: Unknown 00:29:57.374 Deallocate in Write Zeroes: Not Supported 00:29:57.374 Deallocated Guard Field: 0xFFFF 00:29:57.374 Flush: Supported 00:29:57.374 Reservation: Supported 00:29:57.374 Namespace Sharing Capabilities: Multiple Controllers 00:29:57.374 Size (in LBAs): 131072 (0GiB) 00:29:57.374 Capacity (in LBAs): 131072 (0GiB) 00:29:57.374 Utilization (in LBAs): 131072 (0GiB) 00:29:57.374 NGUID: ABCDEF0123456789ABCDEF0123456789 00:29:57.374 EUI64: ABCDEF0123456789 00:29:57.374 UUID: 
c4548118-1c6f-47ad-a160-df4bc7283c10 00:29:57.374 Thin Provisioning: Not Supported 00:29:57.374 Per-NS Atomic Units: Yes 00:29:57.374 Atomic Boundary Size (Normal): 0 00:29:57.374 Atomic Boundary Size (PFail): 0 00:29:57.374 Atomic Boundary Offset: 0 00:29:57.374 Maximum Single Source Range Length: 65535 00:29:57.374 Maximum Copy Length: 65535 00:29:57.374 Maximum Source Range Count: 1 00:29:57.374 NGUID/EUI64 Never Reused: No 00:29:57.374 Namespace Write Protected: No 00:29:57.374 Number of LBA Formats: 1 00:29:57.374 Current LBA Format: LBA Format #00 00:29:57.374 LBA Format #00: Data Size: 512 Metadata Size: 0 00:29:57.374 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:57.374 rmmod nvme_tcp 00:29:57.374 
rmmod nvme_fabrics 00:29:57.374 rmmod nvme_keyring 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 1116599 ']' 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 1116599 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1116599 ']' 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1116599 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1116599 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1116599' 00:29:57.374 killing process with pid 1116599 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 1116599 00:29:57.374 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1116599 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.633 06:35:49 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:59.537 06:35:51 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:59.537 00:29:59.537 real 0m9.271s 00:29:59.537 user 0m5.670s 00:29:59.537 sys 0m4.815s 00:29:59.796 06:35:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:59.796 06:35:51 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:29:59.796 ************************************ 00:29:59.796 END TEST nvmf_identify 00:29:59.796 ************************************ 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:29:59.797 ************************************ 00:29:59.797 START TEST nvmf_perf 00:29:59.797 ************************************ 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:29:59.797 * Looking for test storage... 00:29:59.797 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:59.797 06:35:51 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.797 --rc genhtml_branch_coverage=1 
00:29:59.797 --rc genhtml_function_coverage=1 00:29:59.797 --rc genhtml_legend=1 00:29:59.797 --rc geninfo_all_blocks=1 00:29:59.797 --rc geninfo_unexecuted_blocks=1 00:29:59.797 00:29:59.797 ' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.797 --rc genhtml_branch_coverage=1 00:29:59.797 --rc genhtml_function_coverage=1 00:29:59.797 --rc genhtml_legend=1 00:29:59.797 --rc geninfo_all_blocks=1 00:29:59.797 --rc geninfo_unexecuted_blocks=1 00:29:59.797 00:29:59.797 ' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.797 --rc genhtml_branch_coverage=1 00:29:59.797 --rc genhtml_function_coverage=1 00:29:59.797 --rc genhtml_legend=1 00:29:59.797 --rc geninfo_all_blocks=1 00:29:59.797 --rc geninfo_unexecuted_blocks=1 00:29:59.797 00:29:59.797 ' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:59.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:59.797 --rc genhtml_branch_coverage=1 00:29:59.797 --rc genhtml_function_coverage=1 00:29:59.797 --rc genhtml_legend=1 00:29:59.797 --rc geninfo_all_blocks=1 00:29:59.797 --rc geninfo_unexecuted_blocks=1 00:29:59.797 00:29:59.797 ' 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:59.797 06:35:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:59.797 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.056 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.057 06:35:51 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:00.057 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:30:00.057 06:35:51 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:06.625 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:06.626 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.626 
06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:06.626 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up 
]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:06.626 Found net devices under 0000:af:00.0: cvl_0_0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:06.626 Found net devices under 0000:af:00.1: cvl_0_1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:06.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:06.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:30:06.626 00:30:06.626 --- 10.0.0.2 ping statistics --- 00:30:06.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.626 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:06.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:06.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:30:06.626 00:30:06.626 --- 10.0.0.1 ping statistics --- 00:30:06.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:06.626 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=1120108 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 1120108 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:06.626 
06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1120108 ']' 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:06.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.626 [2024-12-13 06:35:57.386920] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:30:06.626 [2024-12-13 06:35:57.386971] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:06.626 [2024-12-13 06:35:57.465458] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:06.626 [2024-12-13 06:35:57.489107] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:06.626 [2024-12-13 06:35:57.489144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:06.626 [2024-12-13 06:35:57.489151] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:06.626 [2024-12-13 06:35:57.489156] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:06.626 [2024-12-13 06:35:57.489161] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:06.626 [2024-12-13 06:35:57.490445] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:06.626 [2024-12-13 06:35:57.490556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:06.626 [2024-12-13 06:35:57.490590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:06.626 [2024-12-13 06:35:57.490591] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:06.626 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:09.159 06:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:09.159 06:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:09.417 06:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:09.417 06:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:09.675 06:36:01 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:09.675 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:09.675 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:09.675 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:09.675 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:09.675 [2024-12-13 06:36:01.283678] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:09.675 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:09.933 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:09.933 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:10.192 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:10.192 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:10.450 06:36:01 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:10.450 [2024-12-13 06:36:02.089254] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:10.707 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:10.707 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:10.707 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:10.707 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:10.707 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:12.083 Initializing NVMe Controllers 00:30:12.083 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:12.083 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:12.083 Initialization complete. Launching workers. 00:30:12.083 ======================================================== 00:30:12.083 Latency(us) 00:30:12.083 Device Information : IOPS MiB/s Average min max 00:30:12.083 PCIE (0000:5e:00.0) NSID 1 from core 0: 99890.61 390.20 319.90 24.74 4442.27 00:30:12.083 ======================================================== 00:30:12.083 Total : 99890.61 390.20 319.90 24.74 4442.27 00:30:12.083 00:30:12.083 06:36:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:13.460 Initializing NVMe Controllers 00:30:13.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:13.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:13.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:13.460 Initialization complete. Launching workers. 
00:30:13.460 ======================================================== 00:30:13.460 Latency(us) 00:30:13.460 Device Information : IOPS MiB/s Average min max 00:30:13.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 96.00 0.38 10645.45 104.39 44875.76 00:30:13.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 47.00 0.18 21547.66 7963.10 47887.46 00:30:13.460 ======================================================== 00:30:13.460 Total : 143.00 0.56 14228.70 104.39 47887.46 00:30:13.460 00:30:13.460 06:36:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:14.834 Initializing NVMe Controllers 00:30:14.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:14.834 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:14.834 Initialization complete. Launching workers. 
00:30:14.834 ======================================================== 00:30:14.834 Latency(us) 00:30:14.834 Device Information : IOPS MiB/s Average min max 00:30:14.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11298.56 44.13 2831.87 428.90 6229.61 00:30:14.834 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3765.85 14.71 8535.10 4490.89 17641.00 00:30:14.834 ======================================================== 00:30:14.834 Total : 15064.41 58.85 4257.58 428.90 17641.00 00:30:14.834 00:30:14.834 06:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:14.834 06:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:14.834 06:36:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:17.367 Initializing NVMe Controllers 00:30:17.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.367 Controller IO queue size 128, less than required. 00:30:17.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.367 Controller IO queue size 128, less than required. 00:30:17.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:17.367 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:17.367 Initialization complete. Launching workers. 
00:30:17.367 ======================================================== 00:30:17.367 Latency(us) 00:30:17.367 Device Information : IOPS MiB/s Average min max 00:30:17.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1802.21 450.55 72224.05 44264.91 103698.65 00:30:17.367 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 610.90 152.73 221069.15 97527.23 351813.90 00:30:17.367 ======================================================== 00:30:17.367 Total : 2413.11 603.28 109905.57 44264.91 351813.90 00:30:17.367 00:30:17.367 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:17.367 No valid NVMe controllers or AIO or URING devices found 00:30:17.367 Initializing NVMe Controllers 00:30:17.367 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:17.367 Controller IO queue size 128, less than required. 00:30:17.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.367 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:17.367 Controller IO queue size 128, less than required. 00:30:17.367 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:17.367 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:17.367 WARNING: Some requested NVMe devices were skipped 00:30:17.367 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:20.716 Initializing NVMe Controllers 00:30:20.716 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:20.716 Controller IO queue size 128, less than required. 00:30:20.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.716 Controller IO queue size 128, less than required. 00:30:20.716 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:20.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:20.716 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:20.716 Initialization complete. Launching workers. 
00:30:20.716 00:30:20.716 ==================== 00:30:20.716 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:20.716 TCP transport: 00:30:20.716 polls: 14907 00:30:20.716 idle_polls: 11211 00:30:20.716 sock_completions: 3696 00:30:20.716 nvme_completions: 6591 00:30:20.716 submitted_requests: 9820 00:30:20.716 queued_requests: 1 00:30:20.716 00:30:20.716 ==================== 00:30:20.716 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:20.716 TCP transport: 00:30:20.716 polls: 15247 00:30:20.716 idle_polls: 11833 00:30:20.716 sock_completions: 3414 00:30:20.716 nvme_completions: 6183 00:30:20.716 submitted_requests: 9286 00:30:20.716 queued_requests: 1 00:30:20.716 ======================================================== 00:30:20.716 Latency(us) 00:30:20.716 Device Information : IOPS MiB/s Average min max 00:30:20.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1647.39 411.85 79525.92 53780.31 128661.72 00:30:20.716 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1545.40 386.35 83194.24 48240.55 127790.71 00:30:20.716 ======================================================== 00:30:20.716 Total : 3192.79 798.20 81301.49 48240.55 128661.72 00:30:20.716 00:30:20.716 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:20.716 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:20.716 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:20.716 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:20.716 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=3c336a11-ed2c-41c4-ba4e-a6994ca7ce85 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 3c336a11-ed2c-41c4-ba4e-a6994ca7ce85 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=3c336a11-ed2c-41c4-ba4e-a6994ca7ce85 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:24.003 { 00:30:24.003 "uuid": "3c336a11-ed2c-41c4-ba4e-a6994ca7ce85", 00:30:24.003 "name": "lvs_0", 00:30:24.003 "base_bdev": "Nvme0n1", 00:30:24.003 "total_data_clusters": 238234, 00:30:24.003 "free_clusters": 238234, 00:30:24.003 "block_size": 512, 00:30:24.003 "cluster_size": 4194304 00:30:24.003 } 00:30:24.003 ]' 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="3c336a11-ed2c-41c4-ba4e-a6994ca7ce85") .free_clusters' 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="3c336a11-ed2c-41c4-ba4e-a6994ca7ce85") .cluster_size' 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:24.003 952936 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:24.003 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3c336a11-ed2c-41c4-ba4e-a6994ca7ce85 lbd_0 20480 00:30:24.570 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=97d89468-5688-4298-8c1f-0f6561a41beb 00:30:24.570 06:36:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 97d89468-5688-4298-8c1f-0f6561a41beb lvs_n_0 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=fb758b22-0da8-4bac-b559-1613ea5c3e7c 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb fb758b22-0da8-4bac-b559-1613ea5c3e7c 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=fb758b22-0da8-4bac-b559-1613ea5c3e7c 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:25.138 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:25.396 { 00:30:25.396 "uuid": "3c336a11-ed2c-41c4-ba4e-a6994ca7ce85", 00:30:25.396 "name": "lvs_0", 00:30:25.396 "base_bdev": "Nvme0n1", 00:30:25.396 "total_data_clusters": 238234, 00:30:25.396 "free_clusters": 233114, 00:30:25.396 "block_size": 512, 00:30:25.396 
"cluster_size": 4194304 00:30:25.396 }, 00:30:25.396 { 00:30:25.396 "uuid": "fb758b22-0da8-4bac-b559-1613ea5c3e7c", 00:30:25.396 "name": "lvs_n_0", 00:30:25.396 "base_bdev": "97d89468-5688-4298-8c1f-0f6561a41beb", 00:30:25.396 "total_data_clusters": 5114, 00:30:25.396 "free_clusters": 5114, 00:30:25.396 "block_size": 512, 00:30:25.396 "cluster_size": 4194304 00:30:25.396 } 00:30:25.396 ]' 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="fb758b22-0da8-4bac-b559-1613ea5c3e7c") .free_clusters' 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="fb758b22-0da8-4bac-b559-1613ea5c3e7c") .cluster_size' 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:25.396 20456 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:25.396 06:36:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fb758b22-0da8-4bac-b559-1613ea5c3e7c lbd_nest_0 20456 00:30:25.655 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=d2785498-8051-4731-b8b7-e508431dd554 00:30:25.655 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:25.655 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:25.655 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 d2785498-8051-4731-b8b7-e508431dd554 00:30:25.914 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:26.173 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:26.173 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:26.173 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:26.173 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:26.173 06:36:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:38.378 Initializing NVMe Controllers 00:30:38.378 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:38.378 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:38.378 Initialization complete. Launching workers. 
00:30:38.378 ======================================================== 00:30:38.378 Latency(us) 00:30:38.378 Device Information : IOPS MiB/s Average min max 00:30:38.378 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 50.70 0.02 19767.02 127.57 45669.99 00:30:38.378 ======================================================== 00:30:38.378 Total : 50.70 0.02 19767.02 127.57 45669.99 00:30:38.378 00:30:38.378 06:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:38.378 06:36:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.481 Initializing NVMe Controllers 00:30:48.481 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:48.481 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:48.481 Initialization complete. Launching workers. 
00:30:48.481 ======================================================== 00:30:48.481 Latency(us) 00:30:48.481 Device Information : IOPS MiB/s Average min max 00:30:48.481 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 62.89 7.86 15963.32 4988.99 51874.65 00:30:48.481 ======================================================== 00:30:48.481 Total : 62.89 7.86 15963.32 4988.99 51874.65 00:30:48.481 00:30:48.481 06:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:48.481 06:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:48.481 06:36:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:58.460 Initializing NVMe Controllers 00:30:58.460 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:58.460 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:58.460 Initialization complete. Launching workers. 
00:30:58.460 ======================================================== 00:30:58.460 Latency(us) 00:30:58.460 Device Information : IOPS MiB/s Average min max 00:30:58.460 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8602.90 4.20 3720.11 240.87 6910.23 00:30:58.460 ======================================================== 00:30:58.460 Total : 8602.90 4.20 3720.11 240.87 6910.23 00:30:58.460 00:30:58.460 06:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:58.460 06:36:48 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:08.435 Initializing NVMe Controllers 00:31:08.435 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:08.435 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:08.435 Initialization complete. Launching workers. 
00:31:08.435 ======================================================== 00:31:08.435 Latency(us) 00:31:08.435 Device Information : IOPS MiB/s Average min max 00:31:08.435 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4340.17 542.52 7373.28 536.41 18171.92 00:31:08.435 ======================================================== 00:31:08.435 Total : 4340.17 542.52 7373.28 536.41 18171.92 00:31:08.435 00:31:08.435 06:36:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:08.435 06:36:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:08.435 06:36:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:18.407 Initializing NVMe Controllers 00:31:18.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:18.407 Controller IO queue size 128, less than required. 00:31:18.407 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:18.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:18.407 Initialization complete. Launching workers. 
00:31:18.407 ======================================================== 00:31:18.407 Latency(us) 00:31:18.407 Device Information : IOPS MiB/s Average min max 00:31:18.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15776.70 7.70 8116.85 1381.75 24193.17 00:31:18.407 ======================================================== 00:31:18.407 Total : 15776.70 7.70 8116.85 1381.75 24193.17 00:31:18.407 00:31:18.407 06:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:18.407 06:37:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:28.384 Initializing NVMe Controllers 00:31:28.384 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:28.384 Controller IO queue size 128, less than required. 00:31:28.384 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:28.384 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:28.384 Initialization complete. Launching workers. 
00:31:28.384 ======================================================== 00:31:28.384 Latency(us) 00:31:28.384 Device Information : IOPS MiB/s Average min max 00:31:28.384 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1204.50 150.56 107232.85 16075.12 231642.53 00:31:28.384 ======================================================== 00:31:28.384 Total : 1204.50 150.56 107232.85 16075.12 231642.53 00:31:28.384 00:31:28.384 06:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:28.384 06:37:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d2785498-8051-4731-b8b7-e508431dd554 00:31:29.319 06:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:29.319 06:37:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 97d89468-5688-4298-8c1f-0f6561a41beb 00:31:29.577 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:29.836 rmmod nvme_tcp 00:31:29.836 rmmod nvme_fabrics 00:31:29.836 rmmod nvme_keyring 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 1120108 ']' 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 1120108 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1120108 ']' 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1120108 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1120108 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1120108' 00:31:29.836 killing process with pid 1120108 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 1120108 00:31:29.836 06:37:21 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1120108 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # 
[[ tcp == \t\c\p ]] 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:31.740 06:37:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:33.652 00:31:33.652 real 1m33.757s 00:31:33.652 user 5m34.439s 00:31:33.652 sys 0m17.242s 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:33.652 ************************************ 00:31:33.652 END TEST nvmf_perf 00:31:33.652 ************************************ 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:33.652 ************************************ 00:31:33.652 START TEST nvmf_fio_host 00:31:33.652 ************************************ 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:33.652 * Looking for test storage... 00:31:33.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.652 --rc genhtml_branch_coverage=1 00:31:33.652 --rc genhtml_function_coverage=1 00:31:33.652 --rc genhtml_legend=1 00:31:33.652 --rc geninfo_all_blocks=1 00:31:33.652 --rc geninfo_unexecuted_blocks=1 00:31:33.652 00:31:33.652 ' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.652 --rc genhtml_branch_coverage=1 00:31:33.652 --rc genhtml_function_coverage=1 00:31:33.652 --rc genhtml_legend=1 00:31:33.652 --rc geninfo_all_blocks=1 00:31:33.652 --rc geninfo_unexecuted_blocks=1 00:31:33.652 00:31:33.652 ' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.652 --rc genhtml_branch_coverage=1 00:31:33.652 --rc genhtml_function_coverage=1 00:31:33.652 --rc genhtml_legend=1 00:31:33.652 --rc geninfo_all_blocks=1 00:31:33.652 --rc geninfo_unexecuted_blocks=1 00:31:33.652 00:31:33.652 ' 00:31:33.652 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:33.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.652 --rc genhtml_branch_coverage=1 00:31:33.652 --rc genhtml_function_coverage=1 00:31:33.653 --rc genhtml_legend=1 00:31:33.653 --rc geninfo_all_blocks=1 00:31:33.653 --rc geninfo_unexecuted_blocks=1 00:31:33.653 00:31:33.653 ' 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.653 06:37:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.653 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:33.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:33.912 06:37:25 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:33.912 06:37:25 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:40.482 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:40.483 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:40.483 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.483 06:37:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:40.483 Found net devices under 0000:af:00.0: cvl_0_0 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:40.483 Found net devices under 0000:af:00.1: cvl_0_1 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:40.483 06:37:30 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:40.483 06:37:30 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:40.483 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.483 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:31:40.483 00:31:40.483 --- 10.0.0.2 ping statistics --- 00:31:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.483 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:40.483 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.483 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:31:40.483 00:31:40.483 --- 10.0.0.1 ping statistics --- 00:31:40.483 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.483 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1137513 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.483 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1137513 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1137513 ']' 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 [2024-12-13 06:37:31.270838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:31:40.484 [2024-12-13 06:37:31.270878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.484 [2024-12-13 06:37:31.351761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.484 [2024-12-13 06:37:31.374520] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.484 [2024-12-13 06:37:31.374558] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:40.484 [2024-12-13 06:37:31.374565] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.484 [2024-12-13 06:37:31.374571] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.484 [2024-12-13 06:37:31.374576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.484 [2024-12-13 06:37:31.378466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.484 [2024-12-13 06:37:31.378505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.484 [2024-12-13 06:37:31.378531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.484 [2024-12-13 06:37:31.378531] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:40.484 [2024-12-13 06:37:31.654912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:40.484 Malloc1 00:31:40.484 06:37:31 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:40.742 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:40.742 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:41.001 [2024-12-13 06:37:32.517548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:41.001 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:41.260 06:37:32 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:41.260 06:37:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:41.519 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:41.519 fio-3.35 00:31:41.519 Starting 1 thread 00:31:44.052 00:31:44.052 test: (groupid=0, jobs=1): err= 0: pid=1138036: Fri Dec 13 06:37:35 2024 00:31:44.052 read: IOPS=12.0k, BW=46.9MiB/s (49.1MB/s)(94.0MiB/2006msec) 00:31:44.052 slat (nsec): min=1527, max=237263, avg=1689.89, stdev=2172.92 00:31:44.052 clat (usec): min=3101, max=10053, avg=5894.81, stdev=439.36 00:31:44.052 lat (usec): min=3136, max=10055, avg=5896.50, stdev=439.32 00:31:44.052 clat percentiles (usec): 00:31:44.052 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:31:44.052 | 30.00th=[ 5669], 40.00th=[ 5800], 50.00th=[ 5866], 60.00th=[ 5997], 00:31:44.052 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6390], 95.00th=[ 6587], 00:31:44.052 | 99.00th=[ 6849], 99.50th=[ 6980], 99.90th=[ 8094], 99.95th=[ 8717], 00:31:44.052 | 99.99th=[10028] 00:31:44.052 bw ( KiB/s): min=47064, max=48568, per=100.00%, avg=47992.00, stdev=659.83, samples=4 00:31:44.052 iops : min=11766, max=12142, avg=11998.00, stdev=164.96, samples=4 00:31:44.052 write: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(93.6MiB/2006msec); 0 zone resets 00:31:44.052 slat (nsec): min=1567, max=225986, avg=1749.46, stdev=1648.24 00:31:44.052 clat (usec): min=2431, max=9204, avg=4750.31, stdev=375.90 00:31:44.052 lat (usec): min=2446, max=9206, avg=4752.06, stdev=375.97 00:31:44.052 clat percentiles (usec): 00:31:44.052 | 1.00th=[ 3916], 5.00th=[ 4178], 10.00th=[ 4293], 20.00th=[ 4490], 00:31:44.052 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 
00:31:44.052 | 70.00th=[ 4948], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:31:44.052 | 99.00th=[ 5604], 99.50th=[ 5866], 99.90th=[ 7701], 99.95th=[ 8586], 00:31:44.052 | 99.99th=[ 9110] 00:31:44.052 bw ( KiB/s): min=47536, max=48256, per=100.00%, avg=47804.00, stdev=313.64, samples=4 00:31:44.052 iops : min=11884, max=12064, avg=11951.00, stdev=78.41, samples=4 00:31:44.052 lat (msec) : 4=0.82%, 10=99.17%, 20=0.01% 00:31:44.052 cpu : usr=71.42%, sys=27.53%, ctx=87, majf=0, minf=3 00:31:44.052 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:31:44.052 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:44.052 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:44.052 issued rwts: total=24062,23963,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:44.053 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:44.053 00:31:44.053 Run status group 0 (all jobs): 00:31:44.053 READ: bw=46.9MiB/s (49.1MB/s), 46.9MiB/s-46.9MiB/s (49.1MB/s-49.1MB/s), io=94.0MiB (98.6MB), run=2006-2006msec 00:31:44.053 WRITE: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=93.6MiB (98.2MB), run=2006-2006msec 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:44.053 06:37:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:31:44.053 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:31:44.053 fio-3.35 00:31:44.053 Starting 1 thread 00:31:46.587 00:31:46.587 test: (groupid=0, jobs=1): err= 0: pid=1138524: Fri Dec 13 06:37:38 2024 00:31:46.587 read: IOPS=11.1k, BW=173MiB/s (182MB/s)(347MiB/2004msec) 00:31:46.587 slat (nsec): min=2466, max=81792, avg=2794.56, stdev=1285.49 00:31:46.587 clat (usec): min=1776, max=14189, avg=6629.08, stdev=1627.99 00:31:46.587 lat (usec): min=1778, max=14204, avg=6631.88, stdev=1628.16 00:31:46.587 clat percentiles (usec): 00:31:46.587 | 1.00th=[ 3589], 5.00th=[ 4293], 10.00th=[ 4621], 20.00th=[ 5211], 00:31:46.587 | 30.00th=[ 5604], 40.00th=[ 6063], 50.00th=[ 6587], 60.00th=[ 7046], 00:31:46.587 | 70.00th=[ 7439], 80.00th=[ 7832], 90.00th=[ 8717], 95.00th=[ 9503], 00:31:46.587 | 99.00th=[11076], 99.50th=[11731], 99.90th=[13435], 99.95th=[13698], 00:31:46.587 | 99.99th=[14222] 00:31:46.587 bw ( KiB/s): min=84928, max=94880, per=50.59%, avg=89784.00, stdev=5055.15, samples=4 00:31:46.587 iops : min= 5308, max= 5930, avg=5611.50, stdev=315.95, samples=4 00:31:46.587 write: IOPS=6518, BW=102MiB/s (107MB/s)(184MiB/1803msec); 0 zone resets 00:31:46.587 slat (usec): min=29, max=385, avg=31.70, stdev= 7.67 00:31:46.587 clat (usec): min=2177, max=15192, avg=8524.57, stdev=1491.02 00:31:46.587 lat (usec): min=2206, max=15309, avg=8556.28, stdev=1493.00 00:31:46.587 clat percentiles (usec): 00:31:46.587 | 1.00th=[ 5735], 5.00th=[ 6390], 10.00th=[ 6718], 
20.00th=[ 7308], 00:31:46.587 | 30.00th=[ 7701], 40.00th=[ 8029], 50.00th=[ 8455], 60.00th=[ 8717], 00:31:46.587 | 70.00th=[ 9110], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11076], 00:31:46.587 | 99.00th=[12911], 99.50th=[13698], 99.90th=[14877], 99.95th=[15008], 00:31:46.587 | 99.99th=[15139] 00:31:46.587 bw ( KiB/s): min=89184, max=98656, per=89.61%, avg=93464.00, stdev=4733.14, samples=4 00:31:46.587 iops : min= 5574, max= 6166, avg=5841.50, stdev=295.82, samples=4 00:31:46.587 lat (msec) : 2=0.03%, 4=1.85%, 10=90.47%, 20=7.65% 00:31:46.587 cpu : usr=85.67%, sys=13.23%, ctx=182, majf=0, minf=3 00:31:46.587 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:31:46.587 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.587 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:46.587 issued rwts: total=22227,11753,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.587 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:46.587 00:31:46.587 Run status group 0 (all jobs): 00:31:46.587 READ: bw=173MiB/s (182MB/s), 173MiB/s-173MiB/s (182MB/s-182MB/s), io=347MiB (364MB), run=2004-2004msec 00:31:46.587 WRITE: bw=102MiB/s (107MB/s), 102MiB/s-102MiB/s (107MB/s-107MB/s), io=184MiB (193MB), run=1803-1803msec 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 
00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:31:46.587 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:31:46.846 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:31:46.846 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:31:46.846 06:37:38 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:31:50.131 Nvme0n1 00:31:50.131 06:37:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:31:52.664 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=46f140b8-673a-4e86-a9e4-0e0f708ffb95 00:31:52.664 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 46f140b8-673a-4e86-a9e4-0e0f708ffb95 00:31:52.664 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=46f140b8-673a-4e86-a9e4-0e0f708ffb95 00:31:52.665 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:52.665 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:52.665 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:52.665 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:52.924 { 00:31:52.924 "uuid": "46f140b8-673a-4e86-a9e4-0e0f708ffb95", 00:31:52.924 "name": "lvs_0", 00:31:52.924 "base_bdev": "Nvme0n1", 00:31:52.924 "total_data_clusters": 930, 00:31:52.924 "free_clusters": 930, 00:31:52.924 "block_size": 512, 00:31:52.924 "cluster_size": 1073741824 00:31:52.924 } 00:31:52.924 ]' 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="46f140b8-673a-4e86-a9e4-0e0f708ffb95") .free_clusters' 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="46f140b8-673a-4e86-a9e4-0e0f708ffb95") .cluster_size' 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:31:52.924 952320 00:31:52.924 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:31:53.182 eb043b2e-a208-432b-ac48-7a0a6a12c83f 00:31:53.182 06:37:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:31:53.441 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:31:53.701 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep 
libasan 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:53.960 06:37:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:54.219 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:54.219 fio-3.35 00:31:54.219 Starting 1 thread 00:31:56.751 00:31:56.751 test: (groupid=0, jobs=1): err= 0: pid=1140264: Fri Dec 13 06:37:48 2024 00:31:56.751 read: IOPS=8162, BW=31.9MiB/s (33.4MB/s)(64.0MiB/2006msec) 00:31:56.751 slat (nsec): min=1520, max=85315, avg=1643.27, stdev=962.20 00:31:56.751 clat (usec): min=846, max=169831, avg=8636.27, stdev=10207.28 00:31:56.751 lat (usec): min=847, max=169849, avg=8637.92, stdev=10207.42 00:31:56.751 clat percentiles 
(msec): 00:31:56.751 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:31:56.751 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:31:56.751 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 9], 00:31:56.751 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:31:56.751 | 99.99th=[ 169] 00:31:56.751 bw ( KiB/s): min=23208, max=35872, per=99.83%, avg=32594.00, stdev=6259.10, samples=4 00:31:56.751 iops : min= 5802, max= 8968, avg=8148.50, stdev=1564.77, samples=4 00:31:56.751 write: IOPS=8159, BW=31.9MiB/s (33.4MB/s)(63.9MiB/2006msec); 0 zone resets 00:31:56.751 slat (nsec): min=1556, max=80576, avg=1705.12, stdev=923.16 00:31:56.751 clat (usec): min=219, max=168472, avg=6958.33, stdev=9536.12 00:31:56.751 lat (usec): min=221, max=168476, avg=6960.03, stdev=9536.27 00:31:56.751 clat percentiles (msec): 00:31:56.751 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:31:56.751 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:31:56.751 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:31:56.751 | 99.00th=[ 8], 99.50th=[ 9], 99.90th=[ 169], 99.95th=[ 169], 00:31:56.751 | 99.99th=[ 169] 00:31:56.751 bw ( KiB/s): min=24232, max=35584, per=99.99%, avg=32634.00, stdev=5602.88, samples=4 00:31:56.751 iops : min= 6058, max= 8896, avg=8158.50, stdev=1400.72, samples=4 00:31:56.751 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:31:56.751 lat (msec) : 2=0.05%, 4=0.24%, 10=99.10%, 20=0.20%, 250=0.39% 00:31:56.751 cpu : usr=71.62%, sys=27.53%, ctx=110, majf=0, minf=3 00:31:56.751 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:31:56.751 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:56.751 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:56.751 issued rwts: total=16374,16367,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:56.751 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:56.751 00:31:56.751 Run 
status group 0 (all jobs): 00:31:56.751 READ: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=64.0MiB (67.1MB), run=2006-2006msec 00:31:56.751 WRITE: bw=31.9MiB/s (33.4MB/s), 31.9MiB/s-31.9MiB/s (33.4MB/s-33.4MB/s), io=63.9MiB (67.0MB), run=2006-2006msec 00:31:56.751 06:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:56.751 06:37:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=37eefb19-ed8c-401e-a002-a56ddb4dd535 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 37eefb19-ed8c-401e-a002-a56ddb4dd535 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=37eefb19-ed8c-401e-a002-a56ddb4dd535 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:31:58.128 { 00:31:58.128 "uuid": "46f140b8-673a-4e86-a9e4-0e0f708ffb95", 00:31:58.128 "name": "lvs_0", 00:31:58.128 "base_bdev": "Nvme0n1", 00:31:58.128 "total_data_clusters": 930, 00:31:58.128 "free_clusters": 0, 00:31:58.128 "block_size": 512, 00:31:58.128 "cluster_size": 1073741824 00:31:58.128 }, 
00:31:58.128 { 00:31:58.128 "uuid": "37eefb19-ed8c-401e-a002-a56ddb4dd535", 00:31:58.128 "name": "lvs_n_0", 00:31:58.128 "base_bdev": "eb043b2e-a208-432b-ac48-7a0a6a12c83f", 00:31:58.128 "total_data_clusters": 237847, 00:31:58.128 "free_clusters": 237847, 00:31:58.128 "block_size": 512, 00:31:58.128 "cluster_size": 4194304 00:31:58.128 } 00:31:58.128 ]' 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="37eefb19-ed8c-401e-a002-a56ddb4dd535") .free_clusters' 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="37eefb19-ed8c-401e-a002-a56ddb4dd535") .cluster_size' 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:31:58.128 951388 00:31:58.128 06:37:49 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:31:58.696 d3fbb99d-dcb4-4f3c-9937-0f3f6a05181d 00:31:58.696 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:31:58.954 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:31:58.954 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 
-- # asan_lib= 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:59.213 06:37:50 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:59.472 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:59.472 fio-3.35 00:31:59.472 Starting 1 thread 00:32:02.024 00:32:02.024 test: (groupid=0, jobs=1): err= 0: pid=1141170: Fri Dec 13 06:37:53 2024 00:32:02.024 read: IOPS=7945, BW=31.0MiB/s (32.5MB/s)(62.3MiB/2006msec) 00:32:02.024 slat (nsec): min=1526, max=362260, avg=1690.33, stdev=3014.34 00:32:02.024 clat (usec): min=3049, max=14457, avg=8895.41, stdev=773.03 00:32:02.024 lat (usec): min=3052, max=14459, avg=8897.10, stdev=772.94 00:32:02.024 clat percentiles (usec): 00:32:02.024 | 1.00th=[ 7111], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 
00:32:02.024 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8848], 60.00th=[ 9110], 00:32:02.024 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[ 9765], 95.00th=[10159], 00:32:02.024 | 99.00th=[10552], 99.50th=[10683], 99.90th=[13304], 99.95th=[14222], 00:32:02.024 | 99.99th=[14353] 00:32:02.024 bw ( KiB/s): min=30664, max=32256, per=99.87%, avg=31738.00, stdev=727.26, samples=4 00:32:02.024 iops : min= 7666, max= 8064, avg=7934.50, stdev=181.82, samples=4 00:32:02.024 write: IOPS=7922, BW=30.9MiB/s (32.4MB/s)(62.1MiB/2006msec); 0 zone resets 00:32:02.024 slat (nsec): min=1573, max=83528, avg=1738.88, stdev=787.26 00:32:02.024 clat (usec): min=1427, max=13164, avg=7156.90, stdev=640.14 00:32:02.024 lat (usec): min=1431, max=13166, avg=7158.64, stdev=640.11 00:32:02.024 clat percentiles (usec): 00:32:02.024 | 1.00th=[ 5669], 5.00th=[ 6194], 10.00th=[ 6390], 20.00th=[ 6652], 00:32:02.024 | 30.00th=[ 6849], 40.00th=[ 6980], 50.00th=[ 7177], 60.00th=[ 7308], 00:32:02.024 | 70.00th=[ 7439], 80.00th=[ 7635], 90.00th=[ 7898], 95.00th=[ 8160], 00:32:02.024 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[10552], 99.95th=[12387], 00:32:02.024 | 99.99th=[12518] 00:32:02.024 bw ( KiB/s): min=31552, max=31744, per=99.93%, avg=31668.00, stdev=93.64, samples=4 00:32:02.024 iops : min= 7888, max= 7936, avg=7917.00, stdev=23.41, samples=4 00:32:02.024 lat (msec) : 2=0.01%, 4=0.09%, 10=96.49%, 20=3.41% 00:32:02.024 cpu : usr=71.57%, sys=27.53%, ctx=130, majf=0, minf=3 00:32:02.024 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:02.024 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:02.024 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:02.024 issued rwts: total=15938,15892,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:02.024 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:02.024 00:32:02.024 Run status group 0 (all jobs): 00:32:02.024 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s 
(32.5MB/s-32.5MB/s), io=62.3MiB (65.3MB), run=2006-2006msec 00:32:02.024 WRITE: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=62.1MiB (65.1MB), run=2006-2006msec 00:32:02.024 06:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:02.283 06:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:02.283 06:37:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:06.470 06:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:06.470 06:37:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:09.001 06:38:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:09.259 06:38:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@124 -- # set +e 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:11.162 rmmod nvme_tcp 00:32:11.162 rmmod nvme_fabrics 00:32:11.162 rmmod nvme_keyring 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 1137513 ']' 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 1137513 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1137513 ']' 00:32:11.162 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1137513 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1137513 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1137513' 00:32:11.163 killing process with pid 1137513 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1137513 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 
1137513 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:11.163 06:38:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:13.698 00:32:13.698 real 0m39.753s 00:32:13.698 user 2m38.651s 00:32:13.698 sys 0m8.925s 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.698 ************************************ 00:32:13.698 END TEST nvmf_fio_host 00:32:13.698 ************************************ 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:13.698 ************************************ 00:32:13.698 START TEST nvmf_failover 00:32:13.698 ************************************ 00:32:13.698 06:38:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:13.698 * Looking for test storage... 00:32:13.698 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:13.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.698 --rc genhtml_branch_coverage=1 00:32:13.698 --rc genhtml_function_coverage=1 00:32:13.698 --rc genhtml_legend=1 00:32:13.698 --rc geninfo_all_blocks=1 00:32:13.698 --rc geninfo_unexecuted_blocks=1 00:32:13.698 00:32:13.698 ' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:13.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.698 --rc genhtml_branch_coverage=1 00:32:13.698 --rc genhtml_function_coverage=1 00:32:13.698 --rc genhtml_legend=1 00:32:13.698 --rc geninfo_all_blocks=1 00:32:13.698 --rc geninfo_unexecuted_blocks=1 00:32:13.698 00:32:13.698 ' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:13.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.698 --rc genhtml_branch_coverage=1 00:32:13.698 --rc genhtml_function_coverage=1 00:32:13.698 --rc genhtml_legend=1 00:32:13.698 --rc geninfo_all_blocks=1 00:32:13.698 --rc geninfo_unexecuted_blocks=1 00:32:13.698 00:32:13.698 ' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:13.698 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:13.698 --rc genhtml_branch_coverage=1 00:32:13.698 --rc genhtml_function_coverage=1 00:32:13.698 --rc genhtml_legend=1 00:32:13.698 --rc geninfo_all_blocks=1 00:32:13.698 --rc geninfo_unexecuted_blocks=1 00:32:13.698 00:32:13.698 ' 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:13.698 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:13.699 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
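The trace records a benign but real bash pitfall at `common.sh` line 33: the expansion `'[' '' -eq 1 ']'` reaches `test`'s numeric comparison with an empty string, producing `integer expression expected` (exit status 2, so the branch is simply not taken). A minimal sketch of the pitfall and the usual defensive form; the variable name here is illustrative, not from SPDK:

```shell
# An empty/unset variable reaching an arithmetic test errors out; defaulting
# it first keeps the comparison well-formed.
flag=""   # stand-in for the empty value seen in the trace

# Naive form: expands to [ "" -eq 1 ], which prints
# "integer expression expected" on stderr and exits with status 2.
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "naive: enabled"
else
    echo "naive: disabled (or errored)"
fi

# Defensive form: default the empty value to 0 before the numeric test.
if [ "${flag:-0}" -eq 1 ]; then
    echo "safe: enabled"
else
    echo "safe: disabled"
fi
```

Because the erroring test evaluates as false, the script above continues normally, which is why the autotest run proceeds despite the logged message.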
00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:13.699 06:38:05 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:20.269 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:20.270 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:20.270 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:20.270 06:38:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:20.270 Found net devices under 0000:af:00.0: cvl_0_0 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:20.270 Found net devices 
under 0000:af:00.1: cvl_0_1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:20.270 
06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:20.270 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:20.270 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:32:20.270 00:32:20.270 --- 10.0.0.2 ping statistics --- 00:32:20.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.270 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:20.270 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:20.270 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:32:20.270 00:32:20.270 --- 10.0.0.1 ping statistics --- 00:32:20.270 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:20.270 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:20.270 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=1146403 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 1146403 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1146403 ']' 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:20.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:20.271 06:38:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:20.271 [2024-12-13 06:38:11.038669] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:20.271 [2024-12-13 06:38:11.038722] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:20.271 [2024-12-13 06:38:11.115981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:20.271 [2024-12-13 06:38:11.138026] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:20.271 [2024-12-13 06:38:11.138065] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:20.271 [2024-12-13 06:38:11.138072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:20.271 [2024-12-13 06:38:11.138078] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:20.271 [2024-12-13 06:38:11.138082] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:20.271 [2024-12-13 06:38:11.139391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:20.271 [2024-12-13 06:38:11.139499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:20.271 [2024-12-13 06:38:11.139499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:20.271 [2024-12-13 06:38:11.451144] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:20.271 Malloc0 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:20.271 06:38:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:20.530 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:20.788 [2024-12-13 06:38:12.263037] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:20.788 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:21.047 [2024-12-13 06:38:12.459545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:21.047 [2024-12-13 06:38:12.656165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1146652 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1146652 /var/tmp/bdevperf.sock 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 
-- # '[' -z 1146652 ']' 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:21.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:21.047 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:21.306 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:21.306 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:21.306 06:38:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:21.874 NVMe0n1 00:32:21.874 06:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:21.874 00:32:22.133 06:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1146876 00:32:22.133 06:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:22.133 06:38:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
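At this point the test has three TCP listeners on 10.0.0.2 (ports 4420, 4421, 4422) and bdevperf attached with `-x failover` via 4420 and 4421; the next step removes the active listener so I/O fails over to the next path. A small sketch of that path-selection idea, assuming an ordered list of paths where the first live one is active (illustrative only, not SPDK's `bdev_nvme` multipath code):

```shell
# Illustrative failover path selector: paths are tried in configured order;
# removing the active listener shifts traffic to the next live port.
ports="4420 4421 4422"
alive="4420 4421 4422"

active_path() {
    # First configured port that is still alive, mirroring ordered failover
    # across the attached transport IDs.
    local p
    for p in $ports; do
        case " $alive " in *" $p "*) echo "$p"; return 0 ;; esac
    done
    return 1
}

fail_path() {
    # Simulate nvmf_subsystem_remove_listener on the given port.
    alive=$(printf '%s\n' $alive | grep -vx "$1" | tr '\n' ' ')
}

active_path       # prints 4420
fail_path 4420    # listener on 4420 removed, as in the step that follows
active_path       # prints 4421
```

The trace below shows the real counterpart: `rpc.py nvmf_subsystem_remove_listener ... -s 4420` while bdevperf keeps issuing I/O against the subsystem.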
00:32:23.069 06:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.329 [2024-12-13 06:38:14.730425] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5ac0 is same with the state(6) to be set
00:32:23.329 [2024-12-13 06:38:14.730707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5ac0 is same with the state(6) to be set 00:32:23.329 [2024-12-13 06:38:14.730714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5ac0 is same with the state(6) to be set 00:32:23.329 [2024-12-13 06:38:14.730720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba5ac0 is same with the state(6) to be set 00:32:23.329 06:38:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:26.618 06:38:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:26.618 00:32:26.618 06:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:26.618 [2024-12-13 06:38:18.236443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236492] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236519] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236524] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236530] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236541] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236547] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236558] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236564] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236570] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236576] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236581] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236589] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236595] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236601] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236608] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236614] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236620] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236625] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236631] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236636] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236642] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236666] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 [2024-12-13 06:38:18.236678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba68e0 is same with the state(6) to be set 00:32:26.618 06:38:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:29.911 06:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:29.911 [2024-12-13 06:38:21.463524] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:29.911 06:38:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:30.863 06:38:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:31.122 [2024-12-13 06:38:22.682546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set 00:32:31.122 [2024-12-13 06:38:22.682586] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set 00:32:31.122 [2024-12-13 06:38:22.682593] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set 00:32:31.122 [2024-12-13 06:38:22.682599] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set 00:32:31.122 [2024-12-13 06:38:22.682606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1ba7690 is same with the state(6) to be set
00:32:31.122 [2024-12-13 06:38:22.682612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set
00:32:31.122 [2024-12-13 06:38:22.682618] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ba7690 is same with the state(6) to be set
00:32:31.122 06:38:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1146876
00:32:37.694 {
00:32:37.694   "results": [
00:32:37.694     {
00:32:37.694       "job": "NVMe0n1",
00:32:37.694       "core_mask": "0x1",
00:32:37.694       "workload": "verify",
00:32:37.694       "status": "finished",
00:32:37.694       "verify_range": {
00:32:37.694         "start": 0,
00:32:37.694         "length": 16384
00:32:37.694       },
00:32:37.694       "queue_depth": 128,
00:32:37.694       "io_size": 4096,
00:32:37.694       "runtime": 15.008644,
00:32:37.694       "iops": 11140.313541982874,
00:32:37.694       "mibps": 43.5168497733706,
00:32:37.694       "io_failed": 16613,
00:32:37.694       "io_timeout": 0,
00:32:37.694       "avg_latency_us": 10429.797624617431,
00:32:37.694       "min_latency_us": 417.40190476190475,
00:32:37.694       "max_latency_us": 18599.74095238095
00:32:37.694     }
00:32:37.694   ],
00:32:37.694   "core_count": 1
00:32:37.694 }
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1146652
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146652 ']'
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146652
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146652
00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- #
process_name=reactor_0 00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146652' 00:32:37.694 killing process with pid 1146652 00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146652 00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146652 00:32:37.694 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:37.694 [2024-12-13 06:38:12.734928] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:37.694 [2024-12-13 06:38:12.734980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1146652 ] 00:32:37.694 [2024-12-13 06:38:12.813309] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:37.694 [2024-12-13 06:38:12.835718] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:37.694 Running I/O for 15 seconds... 
00:32:37.694 11318.00 IOPS, 44.21 MiB/s [2024-12-13T05:38:29.348Z] [2024-12-13 06:38:14.731016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:37.694 [2024-12-13 06:38:14.731131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.694 [2024-12-13 06:38:14.731348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.694 [2024-12-13 06:38:14.731354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 
[2024-12-13 06:38:14.731384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:37.695 [2024-12-13 06:38:14.731643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.695 [2024-12-13 06:38:14.731715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.695 [2024-12-13 06:38:14.731723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.695 [2024-12-13 06:38:14.731729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated READ abort notices (lba 99240-99752) and WRITE abort notices (lba 99760-99880), all ABORTED - SQ DELETION, elided ...]
00:32:37.697 [2024-12-13 06:38:14.732942] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:32:37.697 [2024-12-13 06:38:14.732949] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:32:37.697 [2024-12-13 06:38:14.732956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99888 len:8 PRP1 0x0 PRP2 0x0
00:32:37.697 [2024-12-13 06:38:14.732962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:37.697 [2024-12-13 06:38:14.733004] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:32:37.697 [2024-12-13 06:38:14.733025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:32:37.697 [2024-12-13 06:38:14.733032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... three further ASYNC EVENT REQUEST aborts (cid 1-3), all ABORTED - SQ DELETION, elided ...]
00:32:37.697 [2024-12-13 06:38:14.733080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:32:37.697 [2024-12-13 06:38:14.735983] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:32:37.697 [2024-12-13 06:38:14.736011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0460 (9): Bad file descriptor
00:32:37.697 [2024-12-13 06:38:14.892695] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:32:37.697 10435.00 IOPS, 40.76 MiB/s [2024-12-13T05:38:29.351Z] 10743.00 IOPS, 41.96 MiB/s [2024-12-13T05:38:29.351Z] 10921.25 IOPS, 42.66 MiB/s [2024-12-13T05:38:29.351Z]
00:32:37.697 [2024-12-13 06:38:18.236833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:71216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:32:37.697 [2024-12-13 06:38:18.236867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:32:37.697 [2024-12-13 06:38:18.236881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:71288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:37.697 [2024-12-13 06:38:18.236889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated WRITE abort notices (lba 71296-71448), all ABORTED - SQ DELETION, elided ...]
00:32:37.698 [2024-12-13 06:38:18.237200] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:71456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:71464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:71472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:71496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:71504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:71512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:71520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:71528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:71536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:71544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:71552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:71560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:71576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:71584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.698 [2024-12-13 06:38:18.237440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.698 [2024-12-13 06:38:18.237453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:71592 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:71600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:71608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:71616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:71624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:71632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 
06:38:18.237543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:71640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:71648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:71656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:71672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:71680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237623] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:71688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:71696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:71704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:71712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:71728 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:71736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:71744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237793] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:71248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.699 [2024-12-13 06:38:18.237845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:71752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:71760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:71768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:71776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:71784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:71792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:71800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:71808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 
[2024-12-13 06:38:18.237959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:71816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:71824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.237989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.237996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:71832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.238003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.238010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:71840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.238018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.238026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:71848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.238033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.238041] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:71856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.699 [2024-12-13 06:38:18.238047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.699 [2024-12-13 06:38:18.238056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:71864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:71872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:71880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:71896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:71904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:71912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:71920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:71928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:71936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:71944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238209] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.700 [2024-12-13 06:38:18.238253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238274] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71976 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238297] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238305] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:71984 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:71992 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238346] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238351] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72000 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238369] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72008 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 
06:38:18.238391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72016 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238419] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238424] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72024 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238442] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238446] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72032 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238469] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238474] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 
[2024-12-13 06:38:18.238480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72040 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238492] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238498] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72048 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72056 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238540] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238545] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72064 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238564] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72072 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238586] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238592] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72080 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238610] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238614] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72088 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238633] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.700 [2024-12-13 06:38:18.238638] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.700 [2024-12-13 06:38:18.238643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72096 len:8 PRP1 0x0 PRP2 0x0 00:32:37.700 [2024-12-13 06:38:18.238649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.700 [2024-12-13 06:38:18.238655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72104 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238677] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72112 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72120 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 
[2024-12-13 06:38:18.238718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238724] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72128 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238748] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238753] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72136 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238772] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238776] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72144 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.238795] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.238800] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.238805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72152 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.238811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250313] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72160 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250337] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72168 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250365] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250379] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72176 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250394] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71280 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250424] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250430] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72184 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250471] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72192 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72200 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250530] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72208 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250558] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72216 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250579] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250585] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72224 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250607] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.701 [2024-12-13 06:38:18.250613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.701 [2024-12-13 06:38:18.250619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72232 len:8 PRP1 0x0 PRP2 0x0 00:32:37.701 [2024-12-13 06:38:18.250628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250673] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:37.701 [2024-12-13 06:38:18.250697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:18.250707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:18.250723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:32:37.701 [2024-12-13 06:38:18.250740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250749] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:18.250760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:18.250768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:37.701 [2024-12-13 06:38:18.250791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0460 (9): Bad file descriptor 00:32:37.701 [2024-12-13 06:38:18.254780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:37.701 [2024-12-13 06:38:18.278621] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:32:37.701 10909.60 IOPS, 42.62 MiB/s [2024-12-13T05:38:29.355Z] 11007.33 IOPS, 43.00 MiB/s [2024-12-13T05:38:29.355Z] 11056.71 IOPS, 43.19 MiB/s [2024-12-13T05:38:29.355Z] 11115.00 IOPS, 43.42 MiB/s [2024-12-13T05:38:29.355Z] 11143.11 IOPS, 43.53 MiB/s [2024-12-13T05:38:29.355Z] [2024-12-13 06:38:22.682402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:22.682442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:22.682459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:22.682466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:22.682485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.701 [2024-12-13 06:38:22.682492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.701 [2024-12-13 06:38:22.682500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:37.702 [2024-12-13 06:38:22.682506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xec0460 is same with the state(6) to be set 00:32:37.702 [2024-12-13 06:38:22.682716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:89280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:89288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:89296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:89304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:89312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:89320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:89328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:89344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:89352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:89360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:89368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:89376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:89384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:89400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:89408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.682987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.682995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:89416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 
[2024-12-13 06:38:22.683002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:89432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:88592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:88600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:88608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683085] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:102 nsid:1 lba:88616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:88624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:88632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:88640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.702 [2024-12-13 06:38:22.683135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:89440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:89448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:89456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:89464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:89472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:89480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:89488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:89496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683257] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:89504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:89512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:89520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.702 [2024-12-13 06:38:22.683301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.702 [2024-12-13 06:38:22.683308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:89528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:89536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 
nsid:1 lba:89544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:89552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:89560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:89568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:89576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 
[2024-12-13 06:38:22.683429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:89592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:89600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:88648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:88656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:88664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:88672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683516] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:88688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:88696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:88704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:88712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 
lba:88720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:88728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:88744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:88752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:88760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 
[2024-12-13 06:38:22.683685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:88768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:88784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:88792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:88800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:88808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:88816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:88824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:89608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:37.703 [2024-12-13 06:38:22.683808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:88832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:88840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 
lba:88848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.703 [2024-12-13 06:38:22.683851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.703 [2024-12-13 06:38:22.683859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:88856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:88864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:88872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:88880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:88888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 
[2024-12-13 06:38:22.683932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:88896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:88904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:88912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:88920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.683986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.683994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:88928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:88936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684014] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:88944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:88952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:88960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:88968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:88976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 
lba:88984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:88992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:89000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:89008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:89016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:89024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 
[2024-12-13 06:38:22.684186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:89032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:89040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:89048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:89056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:89064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:89072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684265] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:89080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:89088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:89096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:89104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:89112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 
lba:89120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:89128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:89136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:89144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:89152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:89160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 
[2024-12-13 06:38:22.684440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:89168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:89176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.704 [2024-12-13 06:38:22.684465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.704 [2024-12-13 06:38:22.684474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:89184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:89192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:89200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:89208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:89216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:89224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:89232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:89248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 
lba:89256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:89264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:37.705 [2024-12-13 06:38:22.684630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684649] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:37.705 [2024-12-13 06:38:22.684655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:37.705 [2024-12-13 06:38:22.684661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:89272 len:8 PRP1 0x0 PRP2 0x0 00:32:37.705 [2024-12-13 06:38:22.684668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:37.705 [2024-12-13 06:38:22.684710] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:37.705 [2024-12-13 06:38:22.684721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:37.705 [2024-12-13 06:38:22.687708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:37.705 [2024-12-13 06:38:22.687737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xec0460 (9): Bad file descriptor 00:32:37.705 [2024-12-13 06:38:22.846267] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 
00:32:37.705 10990.70 IOPS, 42.93 MiB/s [2024-12-13T05:38:29.359Z] 11031.73 IOPS, 43.09 MiB/s [2024-12-13T05:38:29.359Z] 11063.67 IOPS, 43.22 MiB/s [2024-12-13T05:38:29.359Z] 11091.46 IOPS, 43.33 MiB/s [2024-12-13T05:38:29.359Z] 11117.93 IOPS, 43.43 MiB/s [2024-12-13T05:38:29.359Z] 11138.27 IOPS, 43.51 MiB/s 00:32:37.705 Latency(us) 00:32:37.705 [2024-12-13T05:38:29.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.705 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:37.705 Verification LBA range: start 0x0 length 0x4000 00:32:37.705 NVMe0n1 : 15.01 11140.31 43.52 1106.90 0.00 10429.80 417.40 18599.74 00:32:37.705 [2024-12-13T05:38:29.359Z] =================================================================================================================== 00:32:37.705 [2024-12-13T05:38:29.359Z] Total : 11140.31 43.52 1106.90 0.00 10429.80 417.40 18599.74 00:32:37.705 Received shutdown signal, test time was about 15.000000 seconds 00:32:37.705 00:32:37.705 Latency(us) 00:32:37.705 [2024-12-13T05:38:29.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:37.705 [2024-12-13T05:38:29.359Z] =================================================================================================================== 00:32:37.705 [2024-12-13T05:38:29.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1149318 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1149318 /var/tmp/bdevperf.sock 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1149318 ']' 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:37.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.705 06:38:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:37.705 06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.705 06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:37.705 06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:37.705 [2024-12-13 06:38:29.335045] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.964 06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:37.964 [2024-12-13 06:38:29.523545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:37.964 
06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:38.222 NVMe0n1 00:32:38.222 06:38:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:38.789 00:32:38.789 06:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.048 00:32:39.048 06:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:39.048 06:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:39.048 06:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:39.306 06:38:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:42.592 06:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:42.592 06:38:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:42.592 06:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1150071 00:32:42.592 06:38:34 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:42.592 06:38:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1150071 00:32:43.531 { 00:32:43.531 "results": [ 00:32:43.531 { 00:32:43.531 "job": "NVMe0n1", 00:32:43.531 "core_mask": "0x1", 00:32:43.531 "workload": "verify", 00:32:43.531 "status": "finished", 00:32:43.531 "verify_range": { 00:32:43.531 "start": 0, 00:32:43.531 "length": 16384 00:32:43.531 }, 00:32:43.531 "queue_depth": 128, 00:32:43.531 "io_size": 4096, 00:32:43.531 "runtime": 1.006546, 00:32:43.532 "iops": 11328.841404168314, 00:32:43.532 "mibps": 44.253286735032475, 00:32:43.532 "io_failed": 0, 00:32:43.532 "io_timeout": 0, 00:32:43.532 "avg_latency_us": 11243.924446281888, 00:32:43.532 "min_latency_us": 1817.8438095238096, 00:32:43.532 "max_latency_us": 14293.089523809524 00:32:43.532 } 00:32:43.532 ], 00:32:43.532 "core_count": 1 00:32:43.532 } 00:32:43.790 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:43.790 [2024-12-13 06:38:28.972605] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:32:43.790 [2024-12-13 06:38:28.972654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1149318 ] 00:32:43.790 [2024-12-13 06:38:29.049197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:43.790 [2024-12-13 06:38:29.068663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:43.790 [2024-12-13 06:38:30.830612] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:43.790 [2024-12-13 06:38:30.830660] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.790 [2024-12-13 06:38:30.830671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.790 [2024-12-13 06:38:30.830680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.790 [2024-12-13 06:38:30.830686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.790 [2024-12-13 06:38:30.830693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.790 [2024-12-13 06:38:30.830700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.790 [2024-12-13 06:38:30.830707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:43.790 [2024-12-13 06:38:30.830714] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:43.790 [2024-12-13 06:38:30.830720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:32:43.790 [2024-12-13 06:38:30.830747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:32:43.790 [2024-12-13 06:38:30.830762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x677460 (9): Bad file descriptor 00:32:43.790 [2024-12-13 06:38:30.841294] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:32:43.790 Running I/O for 1 seconds... 00:32:43.790 11259.00 IOPS, 43.98 MiB/s 00:32:43.790 Latency(us) 00:32:43.790 [2024-12-13T05:38:35.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:43.790 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:43.790 Verification LBA range: start 0x0 length 0x4000 00:32:43.790 NVMe0n1 : 1.01 11328.84 44.25 0.00 0.00 11243.92 1817.84 14293.09 00:32:43.790 [2024-12-13T05:38:35.444Z] =================================================================================================================== 00:32:43.790 [2024-12-13T05:38:35.444Z] Total : 11328.84 44.25 0.00 0.00 11243.92 1817.84 14293.09 00:32:43.790 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:43.790 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:32:43.790 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:44.049 06:38:35 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:44.049 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:32:44.308 06:38:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:44.566 06:38:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1149318 ']' 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1149318' 00:32:47.854 killing 
process with pid 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1149318 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:32:47.854 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.114 rmmod nvme_tcp 00:32:48.114 rmmod nvme_fabrics 00:32:48.114 rmmod nvme_keyring 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 1146403 ']' 00:32:48.114 06:38:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 1146403 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1146403 ']' 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1146403 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1146403 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1146403' 00:32:48.114 killing process with pid 1146403 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1146403 00:32:48.114 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1146403 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.374 06:38:39 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.374 06:38:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.914 06:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:50.914 00:32:50.914 real 0m37.059s 00:32:50.914 user 1m57.437s 00:32:50.914 sys 0m7.858s 00:32:50.914 06:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:50.914 06:38:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:50.914 ************************************ 00:32:50.914 END TEST nvmf_failover 00:32:50.914 ************************************ 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.914 ************************************ 00:32:50.914 START TEST nvmf_host_discovery 00:32:50.914 ************************************ 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:32:50.914 * Looking for test storage... 
00:32:50.914 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:32:50.914 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:50.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.915 --rc genhtml_branch_coverage=1 00:32:50.915 --rc genhtml_function_coverage=1 00:32:50.915 --rc 
genhtml_legend=1 00:32:50.915 --rc geninfo_all_blocks=1 00:32:50.915 --rc geninfo_unexecuted_blocks=1 00:32:50.915 00:32:50.915 ' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:50.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.915 --rc genhtml_branch_coverage=1 00:32:50.915 --rc genhtml_function_coverage=1 00:32:50.915 --rc genhtml_legend=1 00:32:50.915 --rc geninfo_all_blocks=1 00:32:50.915 --rc geninfo_unexecuted_blocks=1 00:32:50.915 00:32:50.915 ' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:50.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.915 --rc genhtml_branch_coverage=1 00:32:50.915 --rc genhtml_function_coverage=1 00:32:50.915 --rc genhtml_legend=1 00:32:50.915 --rc geninfo_all_blocks=1 00:32:50.915 --rc geninfo_unexecuted_blocks=1 00:32:50.915 00:32:50.915 ' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:50.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:50.915 --rc genhtml_branch_coverage=1 00:32:50.915 --rc genhtml_function_coverage=1 00:32:50.915 --rc genhtml_legend=1 00:32:50.915 --rc geninfo_all_blocks=1 00:32:50.915 --rc geninfo_unexecuted_blocks=1 00:32:50.915 00:32:50.915 ' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:50.915 06:38:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:50.915 06:38:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:50.915 06:38:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:50.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:50.915 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:32:50.916 06:38:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:32:56.194 
06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:56.194 06:38:47 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:56.194 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:56.194 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:56.194 Found net devices under 0000:af:00.0: cvl_0_0 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:56.194 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:32:56.195 Found net devices under 0000:af:00.1: cvl_0_1 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:56.195 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:56.454 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:56.454 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:56.454 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:56.454 06:38:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:56.454 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:56.454 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.361 ms 00:32:56.454 00:32:56.454 --- 10.0.0.2 ping statistics --- 00:32:56.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.454 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:56.454 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:56.454 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:32:56.454 00:32:56.454 --- 10.0.0.1 ping statistics --- 00:32:56.454 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:56.454 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:56.454 
06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=1154373 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 1154373 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1154373 ']' 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:56.454 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.455 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:56.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:56.455 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.455 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.714 [2024-12-13 06:38:48.154215] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:56.714 [2024-12-13 06:38:48.154256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:56.714 [2024-12-13 06:38:48.229394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.714 [2024-12-13 06:38:48.250331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:56.714 [2024-12-13 06:38:48.250361] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:56.714 [2024-12-13 06:38:48.250368] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:56.714 [2024-12-13 06:38:48.250374] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:56.714 [2024-12-13 06:38:48.250379] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:56.714 [2024-12-13 06:38:48.250857] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:56.714 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:56.714 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:56.714 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:56.714 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:56.714 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 [2024-12-13 06:38:48.393011] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 [2024-12-13 06:38:48.405181] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:56.973 06:38:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 null0 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 null1 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:56.973 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1154481 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1154481 /tmp/host.sock 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1154481 ']' 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:56.974 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:56.974 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:56.974 [2024-12-13 06:38:48.480144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:32:56.974 [2024-12-13 06:38:48.480188] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1154481 ] 00:32:56.974 [2024-12-13 06:38:48.553473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:56.974 [2024-12-13 06:38:48.576256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:32:57.233 
06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:32:57.233 06:38:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:32:57.233 
06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.233 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 [2024-12-13 06:38:48.986653] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 
-- # jq '. | length' 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.493 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:57.494 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:57.752 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:57.752 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:32:57.753 06:38:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:58.321 [2024-12-13 06:38:49.731971] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:58.321 [2024-12-13 06:38:49.731993] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:58.321 [2024-12-13 06:38:49.732004] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:58.321 [2024-12-13 06:38:49.858383] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:58.580 [2024-12-13 06:38:50.041342] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:32:58.580 [2024-12-13 06:38:50.042080] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2417c60:1 started. 
00:32:58.580 [2024-12-13 06:38:50.043466] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:58.580 [2024-12-13 06:38:50.043484] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:58.580 [2024-12-13 06:38:50.049945] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2417c60 was disconnected and freed. delete nvme_qpair. 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:58.580 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # 
waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" 
== "$NVMF_PORT" ]]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:58.840 06:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.840 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.841 [2024-12-13 06:38:50.373582] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x2417fe0:1 started. 
00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:58.841 [2024-12-13 06:38:50.380634] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x2417fe0 was disconnected and freed. delete nvme_qpair. 
00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 
'expected_count))' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.841 [2024-12-13 06:38:50.478682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:58.841 [2024-12-13 06:38:50.479358] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:32:58.841 [2024-12-13 06:38:50.479378] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 
00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:58.841 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:32:59.100 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.101 06:38:50 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:59.101 [2024-12-13 06:38:50.606743] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:32:59.101 06:38:50 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:32:59.360 [2024-12-13 06:38:50.831824] bdev_nvme.c:5663:nvme_ctrlr_create_done: 
*INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:32:59.360 [2024-12-13 06:38:50.831858] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:59.360 [2024-12-13 06:38:50.831865] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:59.360 [2024-12-13 06:38:50.831870] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@922 -- # return 0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.299 [2024-12-13 06:38:51.738270] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:33:00.299 [2024-12-13 06:38:51.738291] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:33:00.299 [2024-12-13 06:38:51.745610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.299 [2024-12-13 06:38:51.745627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.299 [2024-12-13 06:38:51.745636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.299 [2024-12-13 06:38:51.745643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.299 [2024-12-13 06:38:51.745650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.299 [2024-12-13 06:38:51.745657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.299 [2024-12-13 06:38:51.745664] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.299 [2024-12-13 06:38:51.745670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.299 [2024-12-13 06:38:51.745677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.299 06:38:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.299 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.300 [2024-12-13 06:38:51.755623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.300 [2024-12-13 06:38:51.765658] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.300 [2024-12-13 06:38:51.765669] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.300 [2024-12-13 06:38:51.765675] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.765680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.300 [2024-12-13 06:38:51.765695] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:00.300 [2024-12-13 06:38:51.765957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.300 [2024-12-13 06:38:51.765971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.300 [2024-12-13 06:38:51.765979] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.300 [2024-12-13 06:38:51.765990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.300 [2024-12-13 06:38:51.766000] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.300 [2024-12-13 06:38:51.766007] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.300 [2024-12-13 06:38:51.766017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.300 [2024-12-13 06:38:51.766024] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.300 [2024-12-13 06:38:51.766029] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.300 [2024-12-13 06:38:51.766033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.300 [2024-12-13 06:38:51.775725] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.300 [2024-12-13 06:38:51.775735] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:00.300 [2024-12-13 06:38:51.775739] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.775743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.300 [2024-12-13 06:38:51.775756] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.775998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.300 [2024-12-13 06:38:51.776010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.300 [2024-12-13 06:38:51.776017] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.300 [2024-12-13 06:38:51.776028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.300 [2024-12-13 06:38:51.776038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.300 [2024-12-13 06:38:51.776044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.300 [2024-12-13 06:38:51.776051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.300 [2024-12-13 06:38:51.776056] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.300 [2024-12-13 06:38:51.776061] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.300 [2024-12-13 06:38:51.776065] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:00.300 [2024-12-13 06:38:51.785803] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.300 [2024-12-13 06:38:51.785816] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.300 [2024-12-13 06:38:51.785820] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.785824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.300 [2024-12-13 06:38:51.785838] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.786099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.300 [2024-12-13 06:38:51.786112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.300 [2024-12-13 06:38:51.786119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.300 [2024-12-13 06:38:51.786129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.300 [2024-12-13 06:38:51.786139] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.300 [2024-12-13 06:38:51.786148] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.300 [2024-12-13 06:38:51.786154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.300 [2024-12-13 06:38:51.786160] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:00.300 [2024-12-13 06:38:51.786164] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.300 [2024-12-13 06:38:51.786168] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.300 [2024-12-13 06:38:51.795868] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.300 [2024-12-13 06:38:51.795880] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.300 [2024-12-13 06:38:51.795884] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:33:00.300 [2024-12-13 06:38:51.795888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.300 [2024-12-13 06:38:51.795900] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.796070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.300 [2024-12-13 06:38:51.796081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.300 [2024-12-13 06:38:51.796089] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.300 [2024-12-13 06:38:51.796098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.300 [2024-12-13 06:38:51.796107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.300 [2024-12-13 06:38:51.796113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.300 [2024-12-13 06:38:51.796119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.300 [2024-12-13 06:38:51.796124] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.300 [2024-12-13 06:38:51.796129] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.300 [2024-12-13 06:38:51.796132] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.300 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.300 [2024-12-13 06:38:51.805931] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.300 [2024-12-13 06:38:51.805944] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:00.300 [2024-12-13 06:38:51.805948] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.300 [2024-12-13 06:38:51.805952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.300 [2024-12-13 06:38:51.805966] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:00.300 [2024-12-13 06:38:51.806068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.300 [2024-12-13 06:38:51.806081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.300 [2024-12-13 06:38:51.806088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.301 [2024-12-13 06:38:51.806099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.301 [2024-12-13 06:38:51.806109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.301 [2024-12-13 06:38:51.806115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.301 [2024-12-13 06:38:51.806122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.301 [2024-12-13 06:38:51.806128] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.301 [2024-12-13 06:38:51.806132] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.301 [2024-12-13 06:38:51.806136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:00.301 [2024-12-13 06:38:51.815996] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:00.301 [2024-12-13 06:38:51.816006] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:00.301 [2024-12-13 06:38:51.816010] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:00.301 [2024-12-13 06:38:51.816014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:00.301 [2024-12-13 06:38:51.816027] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:00.301 [2024-12-13 06:38:51.816247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:00.301 [2024-12-13 06:38:51.816258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9d70 with addr=10.0.0.2, port=4420 00:33:00.301 [2024-12-13 06:38:51.816266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23e9d70 is same with the state(6) to be set 00:33:00.301 [2024-12-13 06:38:51.816276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23e9d70 (9): Bad file descriptor 00:33:00.301 [2024-12-13 06:38:51.816286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:00.301 [2024-12-13 06:38:51.816292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:00.301 [2024-12-13 06:38:51.816298] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:00.301 [2024-12-13 06:38:51.816307] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:00.301 [2024-12-13 06:38:51.816311] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:00.301 [2024-12-13 06:38:51.816315] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:00.301 [2024-12-13 06:38:51.825532] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:00.301 [2024-12-13 06:38:51.825547] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:00.301 06:38:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.301 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:00.561 06:38:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.561 06:38:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:00.561 
06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:00.561 06:38:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:00.561 06:38:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.736 [2024-12-13 06:38:53.149595] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:01.736 [2024-12-13 06:38:53.149612] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:01.736 [2024-12-13 06:38:53.149624] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:01.736 [2024-12-13 06:38:53.237873] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:33:01.996 [2024-12-13 06:38:53.549223] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:01.996 [2024-12-13 06:38:53.549816] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x2423be0:1 started. 00:33:01.996 [2024-12-13 06:38:53.551379] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:01.996 [2024-12-13 06:38:53.551406] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.996 [2024-12-13 06:38:53.559153] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x2423be0 was disconnected and freed. delete nvme_qpair. 00:33:01.996 request: 00:33:01.996 { 00:33:01.996 "name": "nvme", 00:33:01.996 "trtype": "tcp", 00:33:01.996 "traddr": "10.0.0.2", 00:33:01.996 "adrfam": "ipv4", 00:33:01.996 "trsvcid": "8009", 00:33:01.996 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:01.996 "wait_for_attach": true, 00:33:01.996 "method": "bdev_nvme_start_discovery", 00:33:01.996 "req_id": 1 00:33:01.996 } 00:33:01.996 Got JSON-RPC error response 00:33:01.996 response: 00:33:01.996 { 00:33:01.996 "code": -17, 00:33:01.996 "message": "File exists" 00:33:01.996 } 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:01.996 
06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:01.996 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:02.256 06:38:53 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.256 request: 00:33:02.256 { 00:33:02.256 "name": "nvme_second", 00:33:02.256 "trtype": "tcp", 00:33:02.256 "traddr": "10.0.0.2", 00:33:02.256 "adrfam": "ipv4", 00:33:02.256 "trsvcid": "8009", 00:33:02.256 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:02.256 "wait_for_attach": true, 00:33:02.256 "method": "bdev_nvme_start_discovery", 00:33:02.256 "req_id": 1 00:33:02.256 } 00:33:02.256 Got JSON-RPC error response 00:33:02.256 response: 00:33:02.256 { 00:33:02.256 "code": -17, 00:33:02.256 "message": "File exists" 00:33:02.256 } 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # xargs 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:02.256 06:38:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:03.193 [2024-12-13 06:38:54.790814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:03.193 [2024-12-13 06:38:54.790840] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9730 with addr=10.0.0.2, port=8010 00:33:03.194 [2024-12-13 06:38:54.790853] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:03.194 [2024-12-13 06:38:54.790860] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:03.194 [2024-12-13 06:38:54.790866] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:04.571 [2024-12-13 06:38:55.793239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:04.571 [2024-12-13 06:38:55.793262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e9730 with addr=10.0.0.2, port=8010 00:33:04.571 [2024-12-13 06:38:55.793273] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:04.571 [2024-12-13 06:38:55.793279] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:04.571 [2024-12-13 06:38:55.793285] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:05.508 [2024-12-13 06:38:56.795417] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:05.508 request: 00:33:05.508 { 00:33:05.508 "name": "nvme_second", 00:33:05.508 "trtype": "tcp", 00:33:05.509 "traddr": "10.0.0.2", 00:33:05.509 "adrfam": "ipv4", 00:33:05.509 "trsvcid": "8010", 00:33:05.509 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:05.509 "wait_for_attach": false, 00:33:05.509 "attach_timeout_ms": 3000, 00:33:05.509 "method": "bdev_nvme_start_discovery", 00:33:05.509 "req_id": 1 00:33:05.509 } 00:33:05.509 Got JSON-RPC error response 00:33:05.509 response: 00:33:05.509 { 00:33:05.509 "code": -110, 00:33:05.509 "message": "Connection timed out" 00:33:05.509 } 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1154481 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:05.509 06:38:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:05.509 rmmod nvme_tcp 00:33:05.509 rmmod nvme_fabrics 00:33:05.509 rmmod nvme_keyring 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 1154373 ']' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 1154373 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1154373 ']' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1154373 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1154373 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1154373' 
00:33:05.509 killing process with pid 1154373 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1154373 00:33:05.509 06:38:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1154373 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.509 06:38:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:08.048 00:33:08.048 real 0m17.141s 00:33:08.048 user 0m20.552s 00:33:08.048 sys 0m5.764s 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:08.048 06:38:59 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:08.048 ************************************ 00:33:08.048 END TEST nvmf_host_discovery 00:33:08.048 ************************************ 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.048 ************************************ 00:33:08.048 START TEST nvmf_host_multipath_status 00:33:08.048 ************************************ 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:08.048 * Looking for test storage... 
00:33:08.048 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:08.048 06:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:08.048 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:08.048 06:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:08.048 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.048 --rc genhtml_branch_coverage=1 00:33:08.048 --rc genhtml_function_coverage=1 00:33:08.048 --rc genhtml_legend=1 00:33:08.048 --rc geninfo_all_blocks=1 00:33:08.049 --rc geninfo_unexecuted_blocks=1 00:33:08.049 00:33:08.049 ' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:08.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.049 --rc genhtml_branch_coverage=1 00:33:08.049 --rc genhtml_function_coverage=1 00:33:08.049 --rc genhtml_legend=1 00:33:08.049 --rc geninfo_all_blocks=1 00:33:08.049 --rc geninfo_unexecuted_blocks=1 00:33:08.049 00:33:08.049 ' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:08.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.049 --rc genhtml_branch_coverage=1 00:33:08.049 --rc genhtml_function_coverage=1 00:33:08.049 --rc genhtml_legend=1 00:33:08.049 --rc geninfo_all_blocks=1 00:33:08.049 --rc geninfo_unexecuted_blocks=1 00:33:08.049 00:33:08.049 ' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:08.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:08.049 --rc genhtml_branch_coverage=1 00:33:08.049 --rc genhtml_function_coverage=1 00:33:08.049 --rc genhtml_legend=1 00:33:08.049 --rc geninfo_all_blocks=1 00:33:08.049 --rc geninfo_unexecuted_blocks=1 00:33:08.049 00:33:08.049 ' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:08.049 
06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:08.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:08.049 06:38:59 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:08.049 06:38:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.622 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:14.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:14.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:14.623 Found net devices under 0000:af:00.0: cvl_0_0 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.623 06:39:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:14.623 Found net devices under 0000:af:00.1: cvl_0_1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.623 06:39:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:33:14.623 00:33:14.623 --- 10.0.0.2 ping statistics --- 00:33:14.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.623 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:33:14.623 00:33:14.623 --- 10.0.0.1 ping statistics --- 00:33:14.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.623 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:33:14.623 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=1159512 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@510 -- # waitforlisten 1159512 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159512 ']' 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.624 [2024-12-13 06:39:05.502606] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:14.624 [2024-12-13 06:39:05.502664] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.624 [2024-12-13 06:39:05.584940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:14.624 [2024-12-13 06:39:05.607942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.624 [2024-12-13 06:39:05.607982] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:14.624 [2024-12-13 06:39:05.607989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.624 [2024-12-13 06:39:05.607996] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.624 [2024-12-13 06:39:05.608001] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:14.624 [2024-12-13 06:39:05.609154] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.624 [2024-12-13 06:39:05.609155] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1159512 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:14.624 [2024-12-13 06:39:05.921171] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.624 06:39:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:14.624 Malloc0 00:33:14.624 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:14.883 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:15.142 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:15.142 [2024-12-13 06:39:06.720973] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:15.142 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:15.401 [2024-12-13 06:39:06.921464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1159799 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1159799 /var/tmp/bdevperf.sock 00:33:15.401 06:39:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1159799 ']' 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:15.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:15.401 06:39:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:15.661 06:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:15.661 06:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:15.661 06:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:15.920 06:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:16.488 Nvme0n1 00:33:16.488 06:39:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:16.747 Nvme0n1 00:33:16.747 06:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:16.747 06:39:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:18.653 06:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:18.653 06:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:18.912 06:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:19.171 06:39:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:20.109 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:20.109 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:20.109 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.109 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:20.368 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.368 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:20.368 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.368 06:39:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:20.627 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:20.627 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:20.627 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.627 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:20.886 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:21.145 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.145 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:21.145 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:21.145 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:21.404 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:21.404 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:21.404 06:39:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:21.663 06:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:21.922 06:39:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:33:22.860 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:33:22.860 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:22.860 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:22.860 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:23.120 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:23.120 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:23.120 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:23.120 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.379 06:39:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:23.643 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.643 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:23.643 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.643 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:23.902 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:23.902 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:23.902 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:23.902 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:24.161 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:24.161 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:33:24.161 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:24.420 06:39:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:24.420 06:39:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:25.798 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.057 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:26.316 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.316 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:26.316 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.316 06:39:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:26.576 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.576 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:26.576 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:26.576 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:26.834 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:26.834 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:33:26.835 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:27.093 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:27.093 06:39:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.471 06:39:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:28.926 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:29.185 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:29.185 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:29.185 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:29.185 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:29.443 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:29.443 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:33:29.443 06:39:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:29.702 06:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:33:29.960 06:39:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:33:30.896 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:33:30.896 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:30.896 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:30.896 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:31.155 06:39:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.155 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:31.155 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.155 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:31.155 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.155 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:31.412 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.412 06:39:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:31.412 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.412 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:31.412 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.412 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:31.670 
06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:31.670 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:31.670 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:31.670 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.929 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:31.929 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:33:31.929 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:31.929 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:32.187 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:32.187 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:33:32.187 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:33:32.187 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:32.446 06:39:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:33:33.382 06:39:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:33:33.382 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:33.382 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.382 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:33.640 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:33.640 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:33.640 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.640 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:33.898 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:33.898 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:33.898 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:33.898 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:34.157 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.157 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:34.157 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.157 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:34.416 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.416 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:33:34.416 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.416 06:39:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:34.416 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:34.416 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:34.416 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:34.416 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:34.674 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:34.674 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:33:34.933 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:33:34.933 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:35.192 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:35.450 06:39:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:33:36.387 06:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:33:36.387 06:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:36.387 06:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:36.387 06:39:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:36.645 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.645 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:36.645 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.645 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:36.905 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:36.905 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:36.905 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.905 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.164 06:39:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:37.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:37.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:37.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.682 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.682 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:33:37.682 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:37.940 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:38.199 06:39:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:33:39.141 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:33:39.141 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:33:39.141 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.141 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:39.403 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:39.403 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:39.403 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.403 06:39:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:39.662 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.662 06:39:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:39.662 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.662 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:39.921 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:40.179 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.179 
06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:40.179 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:40.179 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:40.438 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:40.438 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:33:40.438 06:39:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:40.697 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:33:40.955 06:39:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:33:41.891 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:33:41.891 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:41.891 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:41.891 06:39:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:42.149 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.149 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:33:42.149 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.149 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:42.408 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.408 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:42.408 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.408 06:39:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:42.408 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:42.408 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:42.408 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:42.408 06:39:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:42.667 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:42.667 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:42.667 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.667 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:42.926 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:42.926 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:42.926 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.926 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:43.184 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:43.184 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:33:43.184 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:43.184 06:39:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:43.442 06:39:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:44.815 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:45.074 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.074 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:45.074 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:45.074 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.333 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.333 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:45.333 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.333 06:39:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:45.591 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.591 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:45.591 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.591 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159799 ']'
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']'
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159799'
killing process with pid 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159799
00:33:45.852 {
00:33:45.852   "results": [
00:33:45.852     {
00:33:45.852       "job": "Nvme0n1",
00:33:45.852       "core_mask": "0x4",
00:33:45.852       "workload": "verify",
00:33:45.852       "status": "terminated",
00:33:45.852       "verify_range": {
00:33:45.852         "start": 0,
00:33:45.852         "length": 16384
00:33:45.852       },
00:33:45.852       "queue_depth": 128,
00:33:45.852       "io_size": 4096,
00:33:45.852       "runtime": 28.924201,
00:33:45.852       "iops": 10769.355392046957,
00:33:45.852       "mibps": 42.067794500183425,
00:33:45.852       "io_failed": 0,
00:33:45.852       "io_timeout": 0,
00:33:45.852       "avg_latency_us": 11865.151417323064,
00:33:45.852       "min_latency_us": 257.46285714285716,
00:33:45.852       "max_latency_us": 3019898.88
00:33:45.852     }
00:33:45.852   ],
00:33:45.852   "core_count": 1
00:33:45.852 }
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1159799
00:33:45.852 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:33:45.852 [2024-12-13 06:39:06.998558] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:33:45.852 [2024-12-13 06:39:06.998612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1159799 ]
00:33:45.852 [2024-12-13 06:39:07.076085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:45.852 [2024-12-13 06:39:07.098274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:33:45.852 Running I/O for 90 seconds...
00:33:45.852 11362.00 IOPS, 44.38 MiB/s [2024-12-13T05:39:37.506Z] 11372.50 IOPS, 44.42 MiB/s [2024-12-13T05:39:37.506Z] 11507.67 IOPS, 44.95 MiB/s [2024-12-13T05:39:37.506Z] 11502.00 IOPS, 44.93 MiB/s [2024-12-13T05:39:37.506Z] 11548.80 IOPS, 45.11 MiB/s [2024-12-13T05:39:37.506Z] 11527.50 IOPS, 45.03 MiB/s [2024-12-13T05:39:37.506Z] 11536.43 IOPS, 45.06 MiB/s [2024-12-13T05:39:37.506Z] 11533.00 IOPS, 45.05 MiB/s [2024-12-13T05:39:37.506Z] 11539.78 IOPS, 45.08 MiB/s [2024-12-13T05:39:37.506Z] 11535.30 IOPS, 45.06 MiB/s [2024-12-13T05:39:37.506Z] 11532.27 IOPS, 45.05 MiB/s [2024-12-13T05:39:37.506Z] 11534.33 IOPS, 45.06 MiB/s [2024-12-13T05:39:37.506Z] [2024-12-13 06:39:21.168605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:3608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:3616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:3624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168739] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:3640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:3648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:3656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.168809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.168816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.169844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.169857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.169871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:111 nsid:1 lba:3680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.852 [2024-12-13 06:39:21.169884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:33:45.852 [2024-12-13 06:39:21.169897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.169904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.169917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:3696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.169924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.169937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.169943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.169956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:3712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.169963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.169976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:3720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.169983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:69 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.169996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.853 [2024-12-13 06:39:21.170003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:2776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:2784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:2792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:2800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:2808 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:2816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:2824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:2832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:2840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 
sqhd:0005 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:2856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:2888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:33:45.853 [2024-12-13 06:39:21.170360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:2904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:2928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:33:45.853 
[2024-12-13 06:39:21.170488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:2944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:2976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:2984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 
06:39:21.170598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:3000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:3008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:3016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:3024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 
06:39:21.170715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:33:45.853 [2024-12-13 06:39:21.170735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.853 [2024-12-13 06:39:21.170741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:3048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:3056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:3064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:3072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 
06:39:21.170822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:3080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:3088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:3104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:3112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170940] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:3120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:3128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.170981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.170987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.171001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.171008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.171021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:3152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.171028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.171042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.171048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:45.854 [2024-12-13 06:39:21.171062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.854 [2024-12-13 06:39:21.171069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002d p:0 m:0 dnr:0
[... identical command/completion NOTICE pairs repeated for READ lba:3176 through lba:3608 and WRITE lba:3736 through lba:3792 (len:8 each); every command on qid:1 completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 002e through 006b ...]
00:33:45.855 11369.46 IOPS, 44.41 MiB/s [2024-12-13T05:39:37.509Z] 10557.36 IOPS, 41.24 MiB/s [2024-12-13T05:39:37.509Z] 9853.53 IOPS, 38.49 MiB/s [2024-12-13T05:39:37.509Z] 9379.44 IOPS, 36.64 MiB/s [2024-12-13T05:39:37.509Z] 9520.18 IOPS, 37.19 MiB/s [2024-12-13T05:39:37.509Z] 9625.50 IOPS, 37.60 MiB/s [2024-12-13T05:39:37.509Z] 9801.74 IOPS, 38.29 MiB/s [2024-12-13T05:39:37.509Z] 9994.35 IOPS, 39.04 MiB/s [2024-12-13T05:39:37.509Z] 10168.38 IOPS, 39.72 MiB/s [2024-12-13T05:39:37.509Z] 10236.32 IOPS, 39.99 MiB/s [2024-12-13T05:39:37.509Z] 10290.78 IOPS, 40.20 MiB/s [2024-12-13T05:39:37.509Z] 10354.88 IOPS, 40.45 MiB/s [2024-12-13T05:39:37.510Z] 10489.32 IOPS, 40.97 MiB/s [2024-12-13T05:39:37.510Z] 10619.96 IOPS, 41.48 MiB/s [2024-12-13T05:39:37.510Z]
[2024-12-13 06:39:35.006016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:46784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:45.856 [2024-12-13 06:39:35.006055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004b p:0 m:0 dnr:0
[... identical NOTICE pairs repeated for interleaved WRITE (lba:46800 through lba:46992) and READ (lba:46008 through lba:46648) commands, all on qid:1 and all completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0053 through 0003 (wrapping at 007f) ...]
00:33:45.857 [2024-12-13 06:39:35.008464] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.857 [2024-12-13 06:39:35.008471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:33:45.857 [2024-12-13 06:39:35.008483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:46664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.857 [2024-12-13 06:39:35.008492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:33:45.857 [2024-12-13 06:39:35.008505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:46696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.857 [2024-12-13 06:39:35.008511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:33:45.857 [2024-12-13 06:39:35.008524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:46728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.857 [2024-12-13 06:39:35.008531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:33:45.857 [2024-12-13 06:39:35.008543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:46760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:45.857 [2024-12-13 06:39:35.008550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:33:45.857 10702.11 IOPS, 41.81 MiB/s [2024-12-13T05:39:37.511Z] 10740.61 IOPS, 41.96 MiB/s [2024-12-13T05:39:37.511Z] Received shutdown signal, test time was about 28.924829 seconds 00:33:45.857 00:33:45.857 Latency(us) 00:33:45.857 
[2024-12-13T05:39:37.511Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:45.857 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:33:45.857 Verification LBA range: start 0x0 length 0x4000 00:33:45.857 Nvme0n1 : 28.92 10769.36 42.07 0.00 0.00 11865.15 257.46 3019898.88 00:33:45.857 [2024-12-13T05:39:37.511Z] =================================================================================================================== 00:33:45.857 [2024-12-13T05:39:37.511Z] Total : 10769.36 42.07 0.00 0.00 11865.15 257.46 3019898.88 00:33:45.857 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:46.170 rmmod nvme_tcp 00:33:46.170 rmmod nvme_fabrics 00:33:46.170 rmmod nvme_keyring 00:33:46.170 
06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 1159512 ']' 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 1159512 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1159512 ']' 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1159512 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1159512 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1159512' 00:33:46.170 killing process with pid 1159512 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1159512 00:33:46.170 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1159512 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:46.483 06:39:37 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:46.483 06:39:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.409 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:48.409 00:33:48.409 real 0m40.776s 00:33:48.409 user 1m50.505s 00:33:48.409 sys 0m11.610s 00:33:48.409 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:48.409 06:39:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:48.409 ************************************ 00:33:48.409 END TEST nvmf_host_multipath_status 00:33:48.409 ************************************ 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:48.668 ************************************ 00:33:48.668 START TEST nvmf_discovery_remove_ifc 00:33:48.668 ************************************ 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:33:48.668 * Looking for test storage... 00:33:48.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:33:48.668 06:39:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:33:48.668 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:48.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.669 --rc genhtml_branch_coverage=1 00:33:48.669 --rc genhtml_function_coverage=1 00:33:48.669 --rc genhtml_legend=1 00:33:48.669 --rc geninfo_all_blocks=1 
00:33:48.669 --rc geninfo_unexecuted_blocks=1 00:33:48.669 00:33:48.669 ' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:48.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.669 --rc genhtml_branch_coverage=1 00:33:48.669 --rc genhtml_function_coverage=1 00:33:48.669 --rc genhtml_legend=1 00:33:48.669 --rc geninfo_all_blocks=1 00:33:48.669 --rc geninfo_unexecuted_blocks=1 00:33:48.669 00:33:48.669 ' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:48.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.669 --rc genhtml_branch_coverage=1 00:33:48.669 --rc genhtml_function_coverage=1 00:33:48.669 --rc genhtml_legend=1 00:33:48.669 --rc geninfo_all_blocks=1 00:33:48.669 --rc geninfo_unexecuted_blocks=1 00:33:48.669 00:33:48.669 ' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:48.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:48.669 --rc genhtml_branch_coverage=1 00:33:48.669 --rc genhtml_function_coverage=1 00:33:48.669 --rc genhtml_legend=1 00:33:48.669 --rc geninfo_all_blocks=1 00:33:48.669 --rc geninfo_unexecuted_blocks=1 00:33:48.669 00:33:48.669 ' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:48.669 
06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:48.669 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:48.928 
06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:48.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:48.928 06:39:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:33:48.928 06:39:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:33:55.498 06:39:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:55.498 06:39:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:55.498 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.498 06:39:45 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:55.498 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:55.498 Found net devices under 0000:af:00.0: cvl_0_0 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:55.498 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:55.498 Found net devices under 0000:af:00.1: cvl_0_1 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 
-- # [[ tcp == tcp ]] 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:55.499 06:39:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns 
cvl_0_0_ns_spdk 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:55.499 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:55.499 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:33:55.499 00:33:55.499 --- 10.0.0.2 ping statistics --- 00:33:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.499 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:55.499 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:55.499 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:33:55.499 00:33:55.499 --- 10.0.0.1 ping statistics --- 00:33:55.499 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:55.499 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=1168699 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 1168699 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168699 ']' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:55.499 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 [2024-12-13 06:39:46.227554] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:33:55.499 [2024-12-13 06:39:46.227596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:55.499 [2024-12-13 06:39:46.307941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.499 [2024-12-13 06:39:46.328810] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:55.499 [2024-12-13 06:39:46.328843] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:33:55.499 [2024-12-13 06:39:46.328851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:55.499 [2024-12-13 06:39:46.328857] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:55.499 [2024-12-13 06:39:46.328862] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:55.499 [2024-12-13 06:39:46.329341] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 [2024-12-13 06:39:46.468420] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:55.499 [2024-12-13 06:39:46.476582] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:55.499 null0 00:33:55.499 [2024-12-13 06:39:46.508569] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1168724 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1168724 /tmp/host.sock 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1168724 ']' 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:55.499 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 [2024-12-13 06:39:46.578774] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:33:55.499 [2024-12-13 06:39:46.578812] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1168724 ] 00:33:55.499 [2024-12-13 06:39:46.652622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:55.499 [2024-12-13 06:39:46.674624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.499 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:33:55.500 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.500 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:55.500 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:55.500 06:39:46 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:33:55.500 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:55.500 06:39:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.436 [2024-12-13 06:39:47.834665] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:56.436 [2024-12-13 06:39:47.834686] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:56.436 [2024-12-13 06:39:47.834697] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:56.436 [2024-12-13 06:39:47.961076] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:33:56.436 [2024-12-13 06:39:48.015626] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:33:56.436 [2024-12-13 06:39:48.016369] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x23d5710:1 started. 
00:33:56.436 [2024-12-13 06:39:48.017652] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:33:56.436 [2024-12-13 06:39:48.017692] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:33:56.436 [2024-12-13 06:39:48.017711] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:33:56.436 [2024-12-13 06:39:48.017725] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:56.436 [2024-12-13 06:39:48.017742] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.436 [2024-12-13 06:39:48.064018] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x23d5710 was disconnected and freed. delete nvme_qpair. 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:33:56.436 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:56.695 06:39:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:57.632 06:39:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:59.008 06:39:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:33:59.943 06:39:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:00.880 06:39:52 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:00.880 06:39:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:01.817 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:01.817 [2024-12-13 06:39:53.459292] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:01.818 
[2024-12-13 06:39:53.459332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.818 [2024-12-13 06:39:53.459344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.818 [2024-12-13 06:39:53.459354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.818 [2024-12-13 06:39:53.459363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.818 [2024-12-13 06:39:53.459371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.818 [2024-12-13 06:39:53.459378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.818 [2024-12-13 06:39:53.459386] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.818 [2024-12-13 06:39:53.459392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.818 [2024-12-13 06:39:53.459399] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:01.818 [2024-12-13 06:39:53.459406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:01.818 [2024-12-13 06:39:53.459413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1ec0 is same with the state(6) to be set 00:34:01.818 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 
!= '' ]] 00:34:01.818 06:39:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:01.818 [2024-12-13 06:39:53.469314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1ec0 (9): Bad file descriptor 00:34:02.077 [2024-12-13 06:39:53.479349] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:02.077 [2024-12-13 06:39:53.479359] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:02.077 [2024-12-13 06:39:53.479365] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:02.077 [2024-12-13 06:39:53.479370] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:02.077 [2024-12-13 06:39:53.479389] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:03.014 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:03.014 [2024-12-13 06:39:54.502484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:03.014 [2024-12-13 06:39:54.502554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b1ec0 with addr=10.0.0.2, port=4420 00:34:03.014 [2024-12-13 06:39:54.502586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b1ec0 is same with the state(6) to be set 00:34:03.014 [2024-12-13 06:39:54.502640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b1ec0 (9): Bad file descriptor 00:34:03.014 [2024-12-13 06:39:54.503609] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:03.014 [2024-12-13 06:39:54.503671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:03.014 [2024-12-13 06:39:54.503695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:03.014 [2024-12-13 06:39:54.503718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:03.014 [2024-12-13 06:39:54.503738] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:03.014 [2024-12-13 06:39:54.503754] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:03.014 [2024-12-13 06:39:54.503767] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:03.015 [2024-12-13 06:39:54.503789] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:03.015 [2024-12-13 06:39:54.503804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:03.015 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.015 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:03.015 06:39:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:03.951 [2024-12-13 06:39:55.506314] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:03.951 [2024-12-13 06:39:55.506335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:03.951 [2024-12-13 06:39:55.506346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:03.951 [2024-12-13 06:39:55.506353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:03.951 [2024-12-13 06:39:55.506360] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:03.951 [2024-12-13 06:39:55.506366] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:03.951 [2024-12-13 06:39:55.506387] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:03.951 [2024-12-13 06:39:55.506391] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:03.951 [2024-12-13 06:39:55.506411] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:03.951 [2024-12-13 06:39:55.506433] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.951 [2024-12-13 06:39:55.506443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.951 [2024-12-13 06:39:55.506458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.951 [2024-12-13 06:39:55.506465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.951 [2024-12-13 06:39:55.506476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:03.951 [2024-12-13 06:39:55.506482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.951 [2024-12-13 06:39:55.506490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.951 [2024-12-13 06:39:55.506496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.951 [2024-12-13 06:39:55.506503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:03.951 [2024-12-13 06:39:55.506510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:03.951 [2024-12-13 06:39:55.506517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:03.951 [2024-12-13 06:39:55.506850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23a15e0 (9): Bad file descriptor 00:34:03.951 [2024-12-13 06:39:55.507861] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:03.951 [2024-12-13 06:39:55.507872] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:03.951 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:03.952 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:03.952 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:03.952 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:03.952 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:03.952 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:04.210 06:39:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:05.147 06:39:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:06.085 [2024-12-13 06:39:57.519340] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:06.085 [2024-12-13 06:39:57.519356] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:06.085 [2024-12-13 06:39:57.519368] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:06.085 [2024-12-13 06:39:57.646749] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:06.085 [2024-12-13 06:39:57.701236] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:06.085 [2024-12-13 06:39:57.701846] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x23accd0:1 started. 00:34:06.085 [2024-12-13 06:39:57.702837] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:06.085 [2024-12-13 06:39:57.702866] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:06.085 [2024-12-13 06:39:57.702883] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:06.085 [2024-12-13 06:39:57.702895] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:06.085 [2024-12-13 06:39:57.702902] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:06.085 [2024-12-13 06:39:57.708414] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x23accd0 was disconnected and freed. delete nvme_qpair. 
00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1168724 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168724 ']' 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168724 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168724 
00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168724' 00:34:06.345 killing process with pid 1168724 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168724 00:34:06.345 06:39:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168724 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:06.604 rmmod nvme_tcp 00:34:06.604 rmmod nvme_fabrics 00:34:06.604 rmmod nvme_keyring 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 1168699 ']' 00:34:06.604 
06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 1168699 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1168699 ']' 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1168699 00:34:06.604 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1168699 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1168699' 00:34:06.605 killing process with pid 1168699 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1168699 00:34:06.605 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1168699 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:06.865 06:39:58 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:06.865 06:39:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:08.772 00:34:08.772 real 0m20.233s 00:34:08.772 user 0m24.482s 00:34:08.772 sys 0m5.696s 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:08.772 ************************************ 00:34:08.772 END TEST nvmf_discovery_remove_ifc 00:34:08.772 ************************************ 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:08.772 ************************************ 
00:34:08.772 START TEST nvmf_identify_kernel_target 00:34:08.772 ************************************ 00:34:08.772 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:09.042 * Looking for test storage... 00:34:09.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:09.042 06:40:00 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:09.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.042 --rc genhtml_branch_coverage=1 00:34:09.042 --rc genhtml_function_coverage=1 00:34:09.042 --rc genhtml_legend=1 00:34:09.042 --rc geninfo_all_blocks=1 00:34:09.042 --rc geninfo_unexecuted_blocks=1 00:34:09.042 00:34:09.042 ' 00:34:09.042 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.043 --rc genhtml_branch_coverage=1 00:34:09.043 --rc genhtml_function_coverage=1 00:34:09.043 --rc genhtml_legend=1 00:34:09.043 --rc geninfo_all_blocks=1 00:34:09.043 --rc geninfo_unexecuted_blocks=1 00:34:09.043 00:34:09.043 ' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.043 --rc genhtml_branch_coverage=1 00:34:09.043 --rc genhtml_function_coverage=1 00:34:09.043 --rc genhtml_legend=1 00:34:09.043 --rc geninfo_all_blocks=1 00:34:09.043 --rc geninfo_unexecuted_blocks=1 00:34:09.043 00:34:09.043 ' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:09.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:09.043 --rc genhtml_branch_coverage=1 00:34:09.043 --rc genhtml_function_coverage=1 00:34:09.043 --rc genhtml_legend=1 00:34:09.043 --rc geninfo_all_blocks=1 
00:34:09.043 --rc geninfo_unexecuted_blocks=1 00:34:09.043 00:34:09.043 ' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:09.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:09.043 06:40:00 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:15.619 06:40:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:15.619 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.619 06:40:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:15.619 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.619 06:40:06 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:15.619 Found net devices under 0000:af:00.0: cvl_0_0 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:15.619 Found net devices under 0000:af:00.1: cvl_0_1 
00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:15.619 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:15.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:15.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:34:15.620 00:34:15.620 --- 10.0.0.2 ping statistics --- 00:34:15.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.620 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:15.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:15.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.237 ms 00:34:15.620 00:34:15.620 --- 10.0.0.1 ping statistics --- 00:34:15.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:15.620 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:15.620 
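The `nvmf_tcp_init` commands traced above (ip netns / ip addr / iptables / ping) implement a standard trick: move one port of a two-port NIC into a network namespace so a single host can act as both NVMe/TCP target (10.0.0.2, inside the namespace) and initiator (10.0.0.1, outside). A sketch of that sequence, reconstructed from the trace rather than copied from common.sh; device names come from the log and the whole thing requires root:

```shell
# Single-host NVMe/TCP test topology: target NIC port in a namespace,
# initiator port in the default namespace, reconstructed from the trace.
NS=cvl_0_0_ns_spdk
TARGET_DEV=cvl_0_0     # moved into the namespace; gets 10.0.0.2
INIT_DEV=cvl_0_1       # stays in the default namespace; gets 10.0.0.1

run() { echo "+ $*"; "$@"; }   # echo each command, like xtrace in the log

setup_netns() {
  run ip -4 addr flush "$TARGET_DEV"
  run ip -4 addr flush "$INIT_DEV"
  run ip netns add "$NS"
  run ip link set "$TARGET_DEV" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INIT_DEV"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_DEV"
  run ip link set "$INIT_DEV" up
  run ip netns exec "$NS" ip link set "$TARGET_DEV" up
  run ip netns exec "$NS" ip link set lo up
  # Open TCP/4420 (the NVMe/TCP port) on the initiator-facing interface.
  run iptables -I INPUT 1 -i "$INIT_DEV" -p tcp --dport 4420 -j ACCEPT
  # Sanity-check reachability in both directions, as the log does.
  run ping -c 1 10.0.0.2
  run ip netns exec "$NS" ping -c 1 10.0.0.1
}
# setup_netns   # run as root on a machine with both NIC ports present
```

Because the two ports sit in different namespaces, the kernel routes the 10.0.0.0/24 traffic over the physical wire between them instead of short-circuiting it through loopback, which is what makes the round-trip times (~0.2-0.4 ms above) meaningful.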
06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:15.620 06:40:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:17.526 Waiting for block devices as requested 00:34:17.785 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:17.785 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:17.785 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.045 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.045 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.045 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:18.305 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:18.305 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:18.305 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:18.564 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:18.564 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:18.564 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:18.564 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:18.824 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:18.824 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:18.824 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:19.083 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:19.083 No valid GPT data, bailing 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:19.083 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:19.084 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:19.344 00:34:19.344 Discovery Log Number of Records 2, Generation counter 2 00:34:19.344 =====Discovery Log Entry 0====== 00:34:19.344 trtype: tcp 00:34:19.344 adrfam: ipv4 00:34:19.344 subtype: current discovery subsystem 
00:34:19.344 treq: not specified, sq flow control disable supported 00:34:19.344 portid: 1 00:34:19.344 trsvcid: 4420 00:34:19.344 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:19.344 traddr: 10.0.0.1 00:34:19.344 eflags: none 00:34:19.344 sectype: none 00:34:19.344 =====Discovery Log Entry 1====== 00:34:19.344 trtype: tcp 00:34:19.344 adrfam: ipv4 00:34:19.344 subtype: nvme subsystem 00:34:19.344 treq: not specified, sq flow control disable supported 00:34:19.344 portid: 1 00:34:19.344 trsvcid: 4420 00:34:19.344 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:19.344 traddr: 10.0.0.1 00:34:19.344 eflags: none 00:34:19.344 sectype: none 00:34:19.344 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:19.344 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:19.344 ===================================================== 00:34:19.344 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:19.344 ===================================================== 00:34:19.344 Controller Capabilities/Features 00:34:19.344 ================================ 00:34:19.344 Vendor ID: 0000 00:34:19.344 Subsystem Vendor ID: 0000 00:34:19.345 Serial Number: 11dabcc4425c9de9119e 00:34:19.345 Model Number: Linux 00:34:19.345 Firmware Version: 6.8.9-20 00:34:19.345 Recommended Arb Burst: 0 00:34:19.345 IEEE OUI Identifier: 00 00 00 00:34:19.345 Multi-path I/O 00:34:19.345 May have multiple subsystem ports: No 00:34:19.345 May have multiple controllers: No 00:34:19.345 Associated with SR-IOV VF: No 00:34:19.345 Max Data Transfer Size: Unlimited 00:34:19.345 Max Number of Namespaces: 0 00:34:19.345 Max Number of I/O Queues: 1024 00:34:19.345 NVMe Specification Version (VS): 1.3 00:34:19.345 NVMe Specification Version (Identify): 1.3 00:34:19.345 Maximum Queue Entries: 1024 
00:34:19.345 Contiguous Queues Required: No 00:34:19.345 Arbitration Mechanisms Supported 00:34:19.345 Weighted Round Robin: Not Supported 00:34:19.345 Vendor Specific: Not Supported 00:34:19.345 Reset Timeout: 7500 ms 00:34:19.345 Doorbell Stride: 4 bytes 00:34:19.345 NVM Subsystem Reset: Not Supported 00:34:19.345 Command Sets Supported 00:34:19.345 NVM Command Set: Supported 00:34:19.345 Boot Partition: Not Supported 00:34:19.345 Memory Page Size Minimum: 4096 bytes 00:34:19.345 Memory Page Size Maximum: 4096 bytes 00:34:19.345 Persistent Memory Region: Not Supported 00:34:19.345 Optional Asynchronous Events Supported 00:34:19.345 Namespace Attribute Notices: Not Supported 00:34:19.345 Firmware Activation Notices: Not Supported 00:34:19.345 ANA Change Notices: Not Supported 00:34:19.345 PLE Aggregate Log Change Notices: Not Supported 00:34:19.345 LBA Status Info Alert Notices: Not Supported 00:34:19.345 EGE Aggregate Log Change Notices: Not Supported 00:34:19.345 Normal NVM Subsystem Shutdown event: Not Supported 00:34:19.345 Zone Descriptor Change Notices: Not Supported 00:34:19.345 Discovery Log Change Notices: Supported 00:34:19.345 Controller Attributes 00:34:19.345 128-bit Host Identifier: Not Supported 00:34:19.345 Non-Operational Permissive Mode: Not Supported 00:34:19.345 NVM Sets: Not Supported 00:34:19.345 Read Recovery Levels: Not Supported 00:34:19.345 Endurance Groups: Not Supported 00:34:19.345 Predictable Latency Mode: Not Supported 00:34:19.345 Traffic Based Keep ALive: Not Supported 00:34:19.345 Namespace Granularity: Not Supported 00:34:19.345 SQ Associations: Not Supported 00:34:19.345 UUID List: Not Supported 00:34:19.345 Multi-Domain Subsystem: Not Supported 00:34:19.345 Fixed Capacity Management: Not Supported 00:34:19.345 Variable Capacity Management: Not Supported 00:34:19.345 Delete Endurance Group: Not Supported 00:34:19.345 Delete NVM Set: Not Supported 00:34:19.345 Extended LBA Formats Supported: Not Supported 00:34:19.345 Flexible 
Data Placement Supported: Not Supported 00:34:19.345 00:34:19.345 Controller Memory Buffer Support 00:34:19.345 ================================ 00:34:19.345 Supported: No 00:34:19.345 00:34:19.345 Persistent Memory Region Support 00:34:19.345 ================================ 00:34:19.345 Supported: No 00:34:19.345 00:34:19.345 Admin Command Set Attributes 00:34:19.345 ============================ 00:34:19.345 Security Send/Receive: Not Supported 00:34:19.345 Format NVM: Not Supported 00:34:19.345 Firmware Activate/Download: Not Supported 00:34:19.345 Namespace Management: Not Supported 00:34:19.345 Device Self-Test: Not Supported 00:34:19.345 Directives: Not Supported 00:34:19.345 NVMe-MI: Not Supported 00:34:19.345 Virtualization Management: Not Supported 00:34:19.345 Doorbell Buffer Config: Not Supported 00:34:19.345 Get LBA Status Capability: Not Supported 00:34:19.345 Command & Feature Lockdown Capability: Not Supported 00:34:19.345 Abort Command Limit: 1 00:34:19.345 Async Event Request Limit: 1 00:34:19.345 Number of Firmware Slots: N/A 00:34:19.345 Firmware Slot 1 Read-Only: N/A 00:34:19.345 Firmware Activation Without Reset: N/A 00:34:19.345 Multiple Update Detection Support: N/A 00:34:19.345 Firmware Update Granularity: No Information Provided 00:34:19.345 Per-Namespace SMART Log: No 00:34:19.345 Asymmetric Namespace Access Log Page: Not Supported 00:34:19.345 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:19.345 Command Effects Log Page: Not Supported 00:34:19.345 Get Log Page Extended Data: Supported 00:34:19.345 Telemetry Log Pages: Not Supported 00:34:19.345 Persistent Event Log Pages: Not Supported 00:34:19.345 Supported Log Pages Log Page: May Support 00:34:19.345 Commands Supported & Effects Log Page: Not Supported 00:34:19.345 Feature Identifiers & Effects Log Page:May Support 00:34:19.345 NVMe-MI Commands & Effects Log Page: May Support 00:34:19.345 Data Area 4 for Telemetry Log: Not Supported 00:34:19.345 Error Log Page Entries 
Supported: 1 00:34:19.345 Keep Alive: Not Supported 00:34:19.345 00:34:19.345 NVM Command Set Attributes 00:34:19.345 ========================== 00:34:19.345 Submission Queue Entry Size 00:34:19.345 Max: 1 00:34:19.345 Min: 1 00:34:19.345 Completion Queue Entry Size 00:34:19.345 Max: 1 00:34:19.345 Min: 1 00:34:19.345 Number of Namespaces: 0 00:34:19.345 Compare Command: Not Supported 00:34:19.345 Write Uncorrectable Command: Not Supported 00:34:19.345 Dataset Management Command: Not Supported 00:34:19.345 Write Zeroes Command: Not Supported 00:34:19.345 Set Features Save Field: Not Supported 00:34:19.345 Reservations: Not Supported 00:34:19.345 Timestamp: Not Supported 00:34:19.345 Copy: Not Supported 00:34:19.345 Volatile Write Cache: Not Present 00:34:19.345 Atomic Write Unit (Normal): 1 00:34:19.345 Atomic Write Unit (PFail): 1 00:34:19.345 Atomic Compare & Write Unit: 1 00:34:19.345 Fused Compare & Write: Not Supported 00:34:19.345 Scatter-Gather List 00:34:19.345 SGL Command Set: Supported 00:34:19.345 SGL Keyed: Not Supported 00:34:19.345 SGL Bit Bucket Descriptor: Not Supported 00:34:19.345 SGL Metadata Pointer: Not Supported 00:34:19.345 Oversized SGL: Not Supported 00:34:19.345 SGL Metadata Address: Not Supported 00:34:19.345 SGL Offset: Supported 00:34:19.345 Transport SGL Data Block: Not Supported 00:34:19.345 Replay Protected Memory Block: Not Supported 00:34:19.345 00:34:19.345 Firmware Slot Information 00:34:19.345 ========================= 00:34:19.345 Active slot: 0 00:34:19.345 00:34:19.345 00:34:19.345 Error Log 00:34:19.345 ========= 00:34:19.345 00:34:19.345 Active Namespaces 00:34:19.345 ================= 00:34:19.345 Discovery Log Page 00:34:19.345 ================== 00:34:19.345 Generation Counter: 2 00:34:19.345 Number of Records: 2 00:34:19.345 Record Format: 0 00:34:19.345 00:34:19.345 Discovery Log Entry 0 00:34:19.345 ---------------------- 00:34:19.345 Transport Type: 3 (TCP) 00:34:19.345 Address Family: 1 (IPv4) 00:34:19.345 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:19.345 Entry Flags: 00:34:19.345 Duplicate Returned Information: 0 00:34:19.345 Explicit Persistent Connection Support for Discovery: 0 00:34:19.345 Transport Requirements: 00:34:19.345 Secure Channel: Not Specified 00:34:19.345 Port ID: 1 (0x0001) 00:34:19.346 Controller ID: 65535 (0xffff) 00:34:19.346 Admin Max SQ Size: 32 00:34:19.346 Transport Service Identifier: 4420 00:34:19.346 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:19.346 Transport Address: 10.0.0.1 00:34:19.346 Discovery Log Entry 1 00:34:19.346 ---------------------- 00:34:19.346 Transport Type: 3 (TCP) 00:34:19.346 Address Family: 1 (IPv4) 00:34:19.346 Subsystem Type: 2 (NVM Subsystem) 00:34:19.346 Entry Flags: 00:34:19.346 Duplicate Returned Information: 0 00:34:19.346 Explicit Persistent Connection Support for Discovery: 0 00:34:19.346 Transport Requirements: 00:34:19.346 Secure Channel: Not Specified 00:34:19.346 Port ID: 1 (0x0001) 00:34:19.346 Controller ID: 65535 (0xffff) 00:34:19.346 Admin Max SQ Size: 32 00:34:19.346 Transport Service Identifier: 4420 00:34:19.346 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:19.346 Transport Address: 10.0.0.1 00:34:19.346 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:19.346 get_feature(0x01) failed 00:34:19.346 get_feature(0x02) failed 00:34:19.346 get_feature(0x04) failed 00:34:19.346 ===================================================== 00:34:19.346 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:19.346 ===================================================== 00:34:19.346 Controller Capabilities/Features 00:34:19.346 ================================ 00:34:19.346 Vendor ID: 0000 00:34:19.346 Subsystem Vendor ID: 
0000 00:34:19.346 Serial Number: 2a56d2791c7b8b19f9a4 00:34:19.346 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:19.346 Firmware Version: 6.8.9-20 00:34:19.346 Recommended Arb Burst: 6 00:34:19.346 IEEE OUI Identifier: 00 00 00 00:34:19.346 Multi-path I/O 00:34:19.346 May have multiple subsystem ports: Yes 00:34:19.346 May have multiple controllers: Yes 00:34:19.346 Associated with SR-IOV VF: No 00:34:19.346 Max Data Transfer Size: Unlimited 00:34:19.346 Max Number of Namespaces: 1024 00:34:19.346 Max Number of I/O Queues: 128 00:34:19.346 NVMe Specification Version (VS): 1.3 00:34:19.346 NVMe Specification Version (Identify): 1.3 00:34:19.346 Maximum Queue Entries: 1024 00:34:19.346 Contiguous Queues Required: No 00:34:19.346 Arbitration Mechanisms Supported 00:34:19.346 Weighted Round Robin: Not Supported 00:34:19.346 Vendor Specific: Not Supported 00:34:19.346 Reset Timeout: 7500 ms 00:34:19.346 Doorbell Stride: 4 bytes 00:34:19.346 NVM Subsystem Reset: Not Supported 00:34:19.346 Command Sets Supported 00:34:19.346 NVM Command Set: Supported 00:34:19.346 Boot Partition: Not Supported 00:34:19.346 Memory Page Size Minimum: 4096 bytes 00:34:19.346 Memory Page Size Maximum: 4096 bytes 00:34:19.346 Persistent Memory Region: Not Supported 00:34:19.346 Optional Asynchronous Events Supported 00:34:19.346 Namespace Attribute Notices: Supported 00:34:19.346 Firmware Activation Notices: Not Supported 00:34:19.346 ANA Change Notices: Supported 00:34:19.346 PLE Aggregate Log Change Notices: Not Supported 00:34:19.346 LBA Status Info Alert Notices: Not Supported 00:34:19.346 EGE Aggregate Log Change Notices: Not Supported 00:34:19.346 Normal NVM Subsystem Shutdown event: Not Supported 00:34:19.346 Zone Descriptor Change Notices: Not Supported 00:34:19.346 Discovery Log Change Notices: Not Supported 00:34:19.346 Controller Attributes 00:34:19.346 128-bit Host Identifier: Supported 00:34:19.346 Non-Operational Permissive Mode: Not Supported 00:34:19.346 NVM Sets: Not 
Supported 00:34:19.346 Read Recovery Levels: Not Supported 00:34:19.346 Endurance Groups: Not Supported 00:34:19.346 Predictable Latency Mode: Not Supported 00:34:19.346 Traffic Based Keep ALive: Supported 00:34:19.346 Namespace Granularity: Not Supported 00:34:19.346 SQ Associations: Not Supported 00:34:19.346 UUID List: Not Supported 00:34:19.346 Multi-Domain Subsystem: Not Supported 00:34:19.346 Fixed Capacity Management: Not Supported 00:34:19.346 Variable Capacity Management: Not Supported 00:34:19.346 Delete Endurance Group: Not Supported 00:34:19.346 Delete NVM Set: Not Supported 00:34:19.346 Extended LBA Formats Supported: Not Supported 00:34:19.346 Flexible Data Placement Supported: Not Supported 00:34:19.346 00:34:19.346 Controller Memory Buffer Support 00:34:19.346 ================================ 00:34:19.346 Supported: No 00:34:19.346 00:34:19.346 Persistent Memory Region Support 00:34:19.346 ================================ 00:34:19.346 Supported: No 00:34:19.346 00:34:19.346 Admin Command Set Attributes 00:34:19.346 ============================ 00:34:19.346 Security Send/Receive: Not Supported 00:34:19.346 Format NVM: Not Supported 00:34:19.346 Firmware Activate/Download: Not Supported 00:34:19.346 Namespace Management: Not Supported 00:34:19.346 Device Self-Test: Not Supported 00:34:19.346 Directives: Not Supported 00:34:19.346 NVMe-MI: Not Supported 00:34:19.346 Virtualization Management: Not Supported 00:34:19.346 Doorbell Buffer Config: Not Supported 00:34:19.346 Get LBA Status Capability: Not Supported 00:34:19.346 Command & Feature Lockdown Capability: Not Supported 00:34:19.346 Abort Command Limit: 4 00:34:19.346 Async Event Request Limit: 4 00:34:19.346 Number of Firmware Slots: N/A 00:34:19.346 Firmware Slot 1 Read-Only: N/A 00:34:19.346 Firmware Activation Without Reset: N/A 00:34:19.346 Multiple Update Detection Support: N/A 00:34:19.346 Firmware Update Granularity: No Information Provided 00:34:19.346 Per-Namespace SMART Log: Yes 
00:34:19.346 Asymmetric Namespace Access Log Page: Supported 00:34:19.346 ANA Transition Time : 10 sec 00:34:19.346 00:34:19.346 Asymmetric Namespace Access Capabilities 00:34:19.346 ANA Optimized State : Supported 00:34:19.346 ANA Non-Optimized State : Supported 00:34:19.346 ANA Inaccessible State : Supported 00:34:19.346 ANA Persistent Loss State : Supported 00:34:19.346 ANA Change State : Supported 00:34:19.346 ANAGRPID is not changed : No 00:34:19.346 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:19.346 00:34:19.346 ANA Group Identifier Maximum : 128 00:34:19.346 Number of ANA Group Identifiers : 128 00:34:19.346 Max Number of Allowed Namespaces : 1024 00:34:19.346 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:19.346 Command Effects Log Page: Supported 00:34:19.346 Get Log Page Extended Data: Supported 00:34:19.346 Telemetry Log Pages: Not Supported 00:34:19.346 Persistent Event Log Pages: Not Supported 00:34:19.346 Supported Log Pages Log Page: May Support 00:34:19.346 Commands Supported & Effects Log Page: Not Supported 00:34:19.346 Feature Identifiers & Effects Log Page:May Support 00:34:19.346 NVMe-MI Commands & Effects Log Page: May Support 00:34:19.346 Data Area 4 for Telemetry Log: Not Supported 00:34:19.346 Error Log Page Entries Supported: 128 00:34:19.346 Keep Alive: Supported 00:34:19.346 Keep Alive Granularity: 1000 ms 00:34:19.346 00:34:19.346 NVM Command Set Attributes 00:34:19.346 ========================== 00:34:19.346 Submission Queue Entry Size 00:34:19.346 Max: 64 00:34:19.346 Min: 64 00:34:19.346 Completion Queue Entry Size 00:34:19.346 Max: 16 00:34:19.346 Min: 16 00:34:19.346 Number of Namespaces: 1024 00:34:19.346 Compare Command: Not Supported 00:34:19.346 Write Uncorrectable Command: Not Supported 00:34:19.346 Dataset Management Command: Supported 00:34:19.346 Write Zeroes Command: Supported 00:34:19.347 Set Features Save Field: Not Supported 00:34:19.347 Reservations: Not Supported 00:34:19.347 Timestamp: Not Supported 
00:34:19.347 Copy: Not Supported 00:34:19.347 Volatile Write Cache: Present 00:34:19.347 Atomic Write Unit (Normal): 1 00:34:19.347 Atomic Write Unit (PFail): 1 00:34:19.347 Atomic Compare & Write Unit: 1 00:34:19.347 Fused Compare & Write: Not Supported 00:34:19.347 Scatter-Gather List 00:34:19.347 SGL Command Set: Supported 00:34:19.347 SGL Keyed: Not Supported 00:34:19.347 SGL Bit Bucket Descriptor: Not Supported 00:34:19.347 SGL Metadata Pointer: Not Supported 00:34:19.347 Oversized SGL: Not Supported 00:34:19.347 SGL Metadata Address: Not Supported 00:34:19.347 SGL Offset: Supported 00:34:19.347 Transport SGL Data Block: Not Supported 00:34:19.347 Replay Protected Memory Block: Not Supported 00:34:19.347 00:34:19.347 Firmware Slot Information 00:34:19.347 ========================= 00:34:19.347 Active slot: 0 00:34:19.347 00:34:19.347 Asymmetric Namespace Access 00:34:19.347 =========================== 00:34:19.347 Change Count : 0 00:34:19.347 Number of ANA Group Descriptors : 1 00:34:19.347 ANA Group Descriptor : 0 00:34:19.347 ANA Group ID : 1 00:34:19.347 Number of NSID Values : 1 00:34:19.347 Change Count : 0 00:34:19.347 ANA State : 1 00:34:19.347 Namespace Identifier : 1 00:34:19.347 00:34:19.347 Commands Supported and Effects 00:34:19.347 ============================== 00:34:19.347 Admin Commands 00:34:19.347 -------------- 00:34:19.347 Get Log Page (02h): Supported 00:34:19.347 Identify (06h): Supported 00:34:19.347 Abort (08h): Supported 00:34:19.347 Set Features (09h): Supported 00:34:19.347 Get Features (0Ah): Supported 00:34:19.347 Asynchronous Event Request (0Ch): Supported 00:34:19.347 Keep Alive (18h): Supported 00:34:19.347 I/O Commands 00:34:19.347 ------------ 00:34:19.347 Flush (00h): Supported 00:34:19.347 Write (01h): Supported LBA-Change 00:34:19.347 Read (02h): Supported 00:34:19.347 Write Zeroes (08h): Supported LBA-Change 00:34:19.347 Dataset Management (09h): Supported 00:34:19.347 00:34:19.347 Error Log 00:34:19.347 ========= 
00:34:19.347 Entry: 0 00:34:19.347 Error Count: 0x3 00:34:19.347 Submission Queue Id: 0x0 00:34:19.347 Command Id: 0x5 00:34:19.347 Phase Bit: 0 00:34:19.347 Status Code: 0x2 00:34:19.347 Status Code Type: 0x0 00:34:19.347 Do Not Retry: 1 00:34:19.347 Error Location: 0x28 00:34:19.347 LBA: 0x0 00:34:19.347 Namespace: 0x0 00:34:19.347 Vendor Log Page: 0x0 00:34:19.347 ----------- 00:34:19.347 Entry: 1 00:34:19.347 Error Count: 0x2 00:34:19.347 Submission Queue Id: 0x0 00:34:19.347 Command Id: 0x5 00:34:19.347 Phase Bit: 0 00:34:19.347 Status Code: 0x2 00:34:19.347 Status Code Type: 0x0 00:34:19.347 Do Not Retry: 1 00:34:19.347 Error Location: 0x28 00:34:19.347 LBA: 0x0 00:34:19.347 Namespace: 0x0 00:34:19.347 Vendor Log Page: 0x0 00:34:19.347 ----------- 00:34:19.347 Entry: 2 00:34:19.347 Error Count: 0x1 00:34:19.347 Submission Queue Id: 0x0 00:34:19.347 Command Id: 0x4 00:34:19.347 Phase Bit: 0 00:34:19.347 Status Code: 0x2 00:34:19.347 Status Code Type: 0x0 00:34:19.347 Do Not Retry: 1 00:34:19.347 Error Location: 0x28 00:34:19.347 LBA: 0x0 00:34:19.347 Namespace: 0x0 00:34:19.347 Vendor Log Page: 0x0 00:34:19.347 00:34:19.347 Number of Queues 00:34:19.347 ================ 00:34:19.347 Number of I/O Submission Queues: 128 00:34:19.347 Number of I/O Completion Queues: 128 00:34:19.347 00:34:19.347 ZNS Specific Controller Data 00:34:19.347 ============================ 00:34:19.347 Zone Append Size Limit: 0 00:34:19.347 00:34:19.347 00:34:19.347 Active Namespaces 00:34:19.347 ================= 00:34:19.347 get_feature(0x05) failed 00:34:19.347 Namespace ID:1 00:34:19.347 Command Set Identifier: NVM (00h) 00:34:19.347 Deallocate: Supported 00:34:19.347 Deallocated/Unwritten Error: Not Supported 00:34:19.347 Deallocated Read Value: Unknown 00:34:19.347 Deallocate in Write Zeroes: Not Supported 00:34:19.347 Deallocated Guard Field: 0xFFFF 00:34:19.347 Flush: Supported 00:34:19.347 Reservation: Not Supported 00:34:19.347 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:19.347 Size (in LBAs): 1953525168 (931GiB) 00:34:19.347 Capacity (in LBAs): 1953525168 (931GiB) 00:34:19.347 Utilization (in LBAs): 1953525168 (931GiB) 00:34:19.347 UUID: 0dd6e9f0-86eb-420e-b8eb-56befa18f770 00:34:19.347 Thin Provisioning: Not Supported 00:34:19.347 Per-NS Atomic Units: Yes 00:34:19.347 Atomic Boundary Size (Normal): 0 00:34:19.347 Atomic Boundary Size (PFail): 0 00:34:19.347 Atomic Boundary Offset: 0 00:34:19.347 NGUID/EUI64 Never Reused: No 00:34:19.347 ANA group ID: 1 00:34:19.347 Namespace Write Protected: No 00:34:19.347 Number of LBA Formats: 1 00:34:19.347 Current LBA Format: LBA Format #00 00:34:19.347 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:19.347 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:19.347 rmmod nvme_tcp 00:34:19.347 rmmod nvme_fabrics 00:34:19.347 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:19.607 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:19.607 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:19.607 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:19.607 06:40:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:19.607 06:40:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:21.514 06:40:13 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:21.514 06:40:13 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:24.806 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:24.806 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
00:34:25.375 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:25.375 00:34:25.375 real 0m16.548s 00:34:25.375 user 0m4.244s 00:34:25.375 sys 0m8.687s 00:34:25.375 06:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:25.375 06:40:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:25.375 ************************************ 00:34:25.375 END TEST nvmf_identify_kernel_target 00:34:25.375 ************************************ 00:34:25.375 06:40:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:25.375 06:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:25.375 06:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:25.375 06:40:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:25.635 ************************************ 00:34:25.635 START TEST nvmf_auth_host 00:34:25.635 ************************************ 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:25.635 * Looking for test storage... 
00:34:25.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:25.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.635 --rc genhtml_branch_coverage=1 00:34:25.635 --rc genhtml_function_coverage=1 00:34:25.635 --rc genhtml_legend=1 00:34:25.635 --rc geninfo_all_blocks=1 00:34:25.635 --rc geninfo_unexecuted_blocks=1 00:34:25.635 00:34:25.635 ' 00:34:25.635 06:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:25.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.635 --rc genhtml_branch_coverage=1 00:34:25.635 --rc genhtml_function_coverage=1 00:34:25.635 --rc genhtml_legend=1 00:34:25.635 --rc geninfo_all_blocks=1 00:34:25.635 --rc geninfo_unexecuted_blocks=1 00:34:25.635 00:34:25.635 ' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:25.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.635 --rc genhtml_branch_coverage=1 00:34:25.635 --rc genhtml_function_coverage=1 00:34:25.635 --rc genhtml_legend=1 00:34:25.635 --rc geninfo_all_blocks=1 00:34:25.635 --rc geninfo_unexecuted_blocks=1 00:34:25.635 00:34:25.635 ' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:25.635 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:25.635 --rc genhtml_branch_coverage=1 00:34:25.635 --rc genhtml_function_coverage=1 00:34:25.635 --rc genhtml_legend=1 00:34:25.635 --rc geninfo_all_blocks=1 00:34:25.635 --rc geninfo_unexecuted_blocks=1 00:34:25.635 00:34:25.635 ' 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:25.635 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.636 06:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:25.636 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:25.636 06:40:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:25.636 06:40:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:32.210 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:32.210 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:32.210 Found net devices under 0000:af:00.0: cvl_0_0 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:32.210 Found net devices under 0000:af:00.1: cvl_0_1 00:34:32.210 06:40:22 
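The device-discovery steps above resolve each supported PCI function to its bound network interfaces through sysfs, then strip the path prefix to get bare interface names (`cvl_0_0`, `cvl_0_1`). A minimal standalone sketch of that lookup; the sysfs root is parameterized here so the demo can run against a fake tree instead of real hardware:

```shell
# Sketch of the pci_net_devs lookup in the log: net devices bound to a PCI
# function live under <sysfs>/<bdf>/net/. The ##*/ expansion mirrors the
# log's pci_net_devs=("${pci_net_devs[@]##*/}") step, which strips the path.
pci_to_netdevs() {
    local base=$1 pci=$2
    local devs=("$base/$pci/net/"*)
    echo "${devs[@]##*/}"
}

# demo against a fake sysfs layout (a real lookup would pass /sys/bus/pci/devices)
root=$(mktemp -d)
mkdir -p "$root/0000:af:00.0/net/cvl_0_0"
pci_to_netdevs "$root" "0000:af:00.0"   # -> cvl_0_0
```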
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.210 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.211 06:40:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.211 06:40:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:32.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.377 ms 00:34:32.211 00:34:32.211 --- 10.0.0.2 ping statistics --- 00:34:32.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.211 rtt min/avg/max/mdev = 0.377/0.377/0.377/0.000 ms 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:32.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:34:32.211 00:34:32.211 --- 10.0.0.1 ping statistics --- 00:34:32.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.211 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=1180471 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:32.211 06:40:23 
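The `nvmf_tcp_init` sequence above moves one physical port into a private network namespace for the target, leaves its link partner in the default namespace for the initiator, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420, and verifies reachability with ping. The same wiring can be reproduced without the physical `cvl_0_*` ports by substituting a veth pair (a hypothetical, self-contained sketch; assumes Linux with iproute2, root, and netns support):

```shell
# Self-contained stand-in for the log's namespace wiring, using a veth pair
# (veth_ini/veth_tgt and demo_ns_spdk are hypothetical names).
setup_demo_netns() {
    local ns=demo_ns_spdk
    [ "$(id -u)" = 0 ] || { echo "not root; skipping"; return 0; }
    ip netns add "$ns" 2>/dev/null || { echo "no netns support; skipping"; return 0; }
    ip link add veth_ini type veth peer name veth_tgt
    ip link set veth_tgt netns "$ns"              # target port into the namespace
    ip addr add 10.0.0.1/24 dev veth_ini          # initiator side, default namespace
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev veth_tgt
    ip link set veth_ini up
    ip netns exec "$ns" ip link set veth_tgt up
    ip netns exec "$ns" ip link set lo up
    if ping -c 1 -W 2 10.0.0.2 > /dev/null 2>&1; then
        echo "initiator -> target OK"
    else
        echo "ping failed"
    fi
    ip netns del "$ns"                            # deleting the ns removes the veth pair
}
setup_demo_netns
```

The `ipts`/`iptables -I INPUT 1 ... --dport 4420 -j ACCEPT` step in the log would be added on the initiator-side interface before connecting a real NVMe/TCP initiator.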
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 1180471 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180471 ']' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=0cd2165e576c09aeacc8cd9658ff08be 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Xm5 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 0cd2165e576c09aeacc8cd9658ff08be 0 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 0cd2165e576c09aeacc8cd9658ff08be 0 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=0cd2165e576c09aeacc8cd9658ff08be 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Xm5 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Xm5 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Xm5 00:34:32.211 06:40:23 
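`gen_dhchap_key` above draws `len/2` random bytes as hex with `xxd`, picks a digest index from its map (null=0, sha256=1, sha384=2, sha512=3), and pipes both into an inline `python -` snippet (whose body the log does not show) before `chmod 0600`-ing the result. A hedged re-implementation, assuming the snippet emits the standard NVMe DH-HMAC-CHAP secret representation, i.e. base64 of the key bytes followed by their little-endian CRC-32, wrapped as `DHHC-1:<hash-id>:<base64>:`:

```shell
# Hedged sketch of gen_dhchap_key/format_dhchap_key. The digest map matches
# the log; the DHHC-1 encoding below is an assumption based on the NVMe
# DH-HMAC-CHAP secret format, not the log itself (which elides the python body).
gen_dhchap_key() {
    local -A digests=([null]=0 [sha256]=1 [sha384]=2 [sha512]=3)
    local digest=${digests[$1]} len=$2 key file
    # the log uses: xxd -p -c0 -l $((len / 2)) /dev/urandom; od is equivalent
    key=$(od -An -tx1 -N $((len / 2)) /dev/urandom | tr -d ' \n')
    file=$(mktemp -t "spdk.key-$1.XXX")
    KEY=$key DIGEST=$digest python3 - > "$file" <<'EOF'
import base64, os, zlib
key = bytes.fromhex(os.environ["KEY"])
crc = zlib.crc32(key).to_bytes(4, "little")     # CRC-32 of the key, little-endian
print(f"DHHC-1:{int(os.environ['DIGEST']):02x}:"
      f"{base64.b64encode(key + crc).decode()}:")
EOF
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_dhchap_key sha256 32)   # 32 hex chars -> 16 key bytes
cat "$keyfile"                        # e.g. DHHC-1:01:...:
```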
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=00b2f2178c57dc292e80901079fe8e9e81ae0094a2fd083d29b3744d49dc45e1 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.4RK 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 00b2f2178c57dc292e80901079fe8e9e81ae0094a2fd083d29b3744d49dc45e1 3 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 00b2f2178c57dc292e80901079fe8e9e81ae0094a2fd083d29b3744d49dc45e1 3 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=00b2f2178c57dc292e80901079fe8e9e81ae0094a2fd083d29b3744d49dc45e1 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.4RK 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.4RK 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.4RK 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=8bf0117c0a3ba0bc7ed0fd54b29b80c2658e0005f25e3303 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9Qu 00:34:32.211 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 8bf0117c0a3ba0bc7ed0fd54b29b80c2658e0005f25e3303 0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 8bf0117c0a3ba0bc7ed0fd54b29b80c2658e0005f25e3303 0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=8bf0117c0a3ba0bc7ed0fd54b29b80c2658e0005f25e3303 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9Qu 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9Qu 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9Qu 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=43310a34f5dc8aebf050dec496868aaeaec51d4be9fc68d0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uDg 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 43310a34f5dc8aebf050dec496868aaeaec51d4be9fc68d0 2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
format_key DHHC-1 43310a34f5dc8aebf050dec496868aaeaec51d4be9fc68d0 2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=43310a34f5dc8aebf050dec496868aaeaec51d4be9fc68d0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uDg 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uDg 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.uDg 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=628a043a525c3f7d15b6412a5d4fe222 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ZEs 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 628a043a525c3f7d15b6412a5d4fe222 1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 628a043a525c3f7d15b6412a5d4fe222 1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=628a043a525c3f7d15b6412a5d4fe222 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ZEs 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ZEs 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ZEs 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@755 -- # key=1a7d175bb6487bbf5d2d85997da15441 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.FKi 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1a7d175bb6487bbf5d2d85997da15441 1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1a7d175bb6487bbf5d2d85997da15441 1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1a7d175bb6487bbf5d2d85997da15441 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.FKi 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.FKi 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.FKi 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:32.212 06:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2cdda83eaa021e0c254868d43a269b22a96147497f4f1276 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.VvY 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2cdda83eaa021e0c254868d43a269b22a96147497f4f1276 2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2cdda83eaa021e0c254868d43a269b22a96147497f4f1276 2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2cdda83eaa021e0c254868d43a269b22a96147497f4f1276 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.VvY 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.VvY 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.VvY 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=209000112391f225169ff84a46f62754 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.ppD 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 209000112391f225169ff84a46f62754 0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 209000112391f225169ff84a46f62754 0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=209000112391f225169ff84a46f62754 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.ppD 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.ppD 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.ppD 00:34:32.212 06:40:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:32.212 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=25a572a731af528ce0f218190fb042f39617f49c7650fbcc8eb4877d47ace01e 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QXq 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 25a572a731af528ce0f218190fb042f39617f49c7650fbcc8eb4877d47ace01e 3 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 25a572a731af528ce0f218190fb042f39617f49c7650fbcc8eb4877d47ace01e 3 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=25a572a731af528ce0f218190fb042f39617f49c7650fbcc8eb4877d47ace01e 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:32.213 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 
00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QXq 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QXq 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.QXq 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1180471 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1180471 ']' 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:32.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:32.472 06:40:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xm5 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.4RK ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.4RK 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9Qu 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.uDg ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uDg 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ZEs 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.472 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.FKi ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.FKi 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.VvY 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.ppD ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.ppD 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.QXq 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:32.731 06:40:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:32.731 06:40:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:35.266 Waiting for block devices as requested 00:34:35.266 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:35.525 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.525 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:35.525 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:35.783 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:35.783 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:35.783 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:35.783 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:36.042 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:36.042 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:36.042 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:36.042 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:36.300 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:36.300 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:36.300 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:36.300 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:36.559 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:37.127 No valid GPT data, bailing 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:37.127 00:34:37.127 Discovery Log Number of Records 2, Generation counter 2 00:34:37.127 =====Discovery Log Entry 0====== 00:34:37.127 trtype: tcp 00:34:37.127 adrfam: ipv4 00:34:37.127 subtype: current discovery subsystem 00:34:37.127 treq: not specified, sq flow control disable supported 00:34:37.127 portid: 1 00:34:37.127 trsvcid: 4420 00:34:37.127 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:37.127 traddr: 10.0.0.1 00:34:37.127 eflags: none 00:34:37.127 sectype: none 00:34:37.127 =====Discovery Log Entry 1====== 00:34:37.127 trtype: tcp 00:34:37.127 adrfam: ipv4 00:34:37.127 subtype: nvme subsystem 00:34:37.127 treq: not specified, sq flow control disable supported 00:34:37.127 portid: 1 00:34:37.127 trsvcid: 4420 00:34:37.127 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:37.127 traddr: 10.0.0.1 00:34:37.127 eflags: none 00:34:37.127 sectype: none 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.127 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.387 nvme0n1 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.387 06:40:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.387 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.388 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:37.388 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.388 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.647 nvme0n1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.647 06:40:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.647 
06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.647 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.906 nvme0n1 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:37.906 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:37.907 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:38.166 nvme0n1 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.166 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 nvme0n1 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:38.425 06:40:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.425 06:40:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 nvme0n1 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.425 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.685 
06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:38.685 
06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.685 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.685 nvme0n1 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.685 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.944 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:38.944 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:38.944 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.945 nvme0n1 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:38.945 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.203 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.203 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.204 nvme0n1 00:34:39.204 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.204 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:39.462 06:40:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.462 06:40:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.462 nvme0n1 00:34:39.462 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.462 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.462 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.463 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:39.463 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.463 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.721 06:40:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 nvme0n1 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.721 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:39.979 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.238 nvme0n1 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.238 
06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.238 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.239 06:40:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.498 nvme0n1 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.498 06:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.498 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.757 nvme0n1 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.757 06:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:40.757 
06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:40.757 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.016 06:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.016 nvme0n1 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.016 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.016 06:40:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.301 
06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.301 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.652 nvme0n1 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.652 06:40:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.652 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.653 06:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.653 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.974 nvme0n1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:41.974 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:41.975 06:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:41.975 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.234 nvme0n1 00:34:42.234 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.234 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.234 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.234 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.234 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:42.492 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.493 06:40:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.493 06:40:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.752 nvme0n1 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:42.752 06:40:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:42.752 06:40:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:42.752 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.011 06:40:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.011 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.270 nvme0n1 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.270 06:40:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:43.270 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.271 06:40:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.271 06:40:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.848 nvme0n1 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:43.848 06:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:43.848 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.415 nvme0n1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.415 06:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.415 06:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:44.415 06:40:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.416 06:40:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.983 nvme0n1 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:44.983 06:40:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:44.983 06:40:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.550 nvme0n1 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.550 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:45.809 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.377 nvme0n1 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.377 
06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.377 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.378 06:40:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.945 nvme0n1 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid 
in "${!keys[@]}" 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe2048 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:46.945 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:46.946 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.205 nvme0n1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.205 
06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe2048 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.205 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.464 nvme0n1 
00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.464 06:40:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.464 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.464 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:47.465 06:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.465 
06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.465 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.724 nvme0n1 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.724 06:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.724 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.983 nvme0n1 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:47.983 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:47.984 06:40:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.984 nvme0n1 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:47.984 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe3072 0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
keyid=0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.243 nvme0n1 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.243 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.502 
06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.502 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 06:40:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 nvme0n1 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.503 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.761 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 
00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:48.762 06:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.762 nvme0n1 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:48.762 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:48.762 06:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.021 nvme0n1 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.021 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:34:49.280 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.281 nvme0n1 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.281 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.539 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.539 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:49.539 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.540 06:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.540 06:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:49.540 06:40:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.540 06:40:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.798 nvme0n1 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:49.798 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:49.799 
06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.799 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.057 nvme0n1 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.057 06:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.057 06:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:34:50.057 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.058 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.316 nvme0n1 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:34:50.316 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.317 06:40:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.317 06:40:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.575 nvme0n1 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.575 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.834 06:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:50.834 06:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:50.834 
06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.834 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.093 nvme0n1 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.093 06:40:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:51.093 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.094 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.352 nvme0n1 
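The DHHC-1 secrets exchanged above follow a fixed shape: assuming the NVMe TP-8006 layout that SPDK uses, the middle field is base64 of the raw secret followed by a 4-byte CRC, and the type indicator ("02" here) implies a 48-byte secret. A minimal sketch, using one key copied verbatim from the log, checks that the decoded payload has the expected 48 + 4 = 52 bytes (this is an illustrative sanity check, not part of the test suite):

```shell
#!/usr/bin/env bash
# Sketch: verify the shape of a DHHC-1:02 key taken from the log above.
# Assumes the TP-8006 layout: base64(48-byte secret || 4-byte CRC).
key='DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==:'
b64=${key#DHHC-1:02:}   # strip the "DHHC-1:02:" prefix
b64=${b64%:}            # strip the trailing ':'
decoded_len=$(printf '%s' "$b64" | base64 -d | wc -c)
echo "decoded payload: ${decoded_len} bytes"   # 48-byte secret + 4-byte CRC
```

A "00" type indicator (as on the ckey above) places no length constraint on the secret, which is why those payloads decode to different sizes.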
00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.352 06:40:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:51.610 06:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.610 
06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.610 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.611 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 nvme0n1 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:51.869 06:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:51.869 06:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:51.869 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:51.870 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.437 nvme0n1 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:52.437 06:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.437 06:40:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.437 06:40:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.696 nvme0n1 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.696 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.955 06:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:52.955 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.214 nvme0n1 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:53.214 06:40:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.214 06:40:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.780 nvme0n1 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:53.780 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:54.037 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.038 06:40:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.605 nvme0n1 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:54.605 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.173 nvme0n1 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.173 06:40:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.740 nvme0n1 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.740 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:55.741 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.999 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:56.566 nvme0n1 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.566 06:40:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:56.566 06:40:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 nvme0n1 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.566 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.825 nvme0n1 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.825 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.083 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.084 nvme0n1 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.084 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.343 nvme0n1 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.343 06:40:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:57.602 nvme0n1 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:57.602 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.602 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.602 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.603 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.603 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:57.603 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.603 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.862 nvme0n1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:57.862 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.862 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 nvme0n1 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 
06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.121 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.121 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.122 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:58.122 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.122 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.381 nvme0n1 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.381 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.381 06:40:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.381 06:40:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.640 nvme0n1 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:34:58.640 06:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.640 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.899 nvme0n1 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.899 
06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.899 
06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.899 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.158 nvme0n1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.158 06:40:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.158 06:40:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.416 nvme0n1 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.416 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.675 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.934 nvme0n1 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:34:59.934 06:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.934 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.193 nvme0n1 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.193 06:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:35:00.193 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=: 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.194 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.452 nvme0n1 00:35:00.452 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.452 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:00.452 06:40:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:00.452 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.452 06:40:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.452 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM: 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]] 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.453 06:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.453 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.019 nvme0n1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.019 06:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.019 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.277 nvme0n1 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.277 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:01.536 
06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.536 06:40:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.536 06:40:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.794 nvme0n1 00:35:01.794 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.794 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.795 06:40:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==: 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]] 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:01.795 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.362 nvme0n1
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=:
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=:
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.362 06:40:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.621 nvme0n1
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.621 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM:
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=:
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MGNkMjE2NWU1NzZjMDlhZWFjYzhjZDk2NThmZjA4YmVSK6eM:
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=: ]]
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDBiMmYyMTc4YzU3ZGMyOTJlODA5MDEwNzlmZThlOWU4MWFlMDA5NGEyZmQwODNkMjliMzc0NGQ0OWRjNDVlMcxGCnA=:
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:02.880 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:02.881 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.449 nvme0n1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==:
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==:
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==:
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==:
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:03.449 06:40:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.016 nvme0n1
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8:
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX:
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8:
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX:
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.016 06:40:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.583 nvme0n1
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.583 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==:
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9:
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:04.841 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MmNkZGE4M2VhYTAyMWUwYzI1NDg2OGQ0M2EyNjliMjJhOTYxNDc0OTdmNGYxMjc27y3YVg==:
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9: ]]
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:MjA5MDAwMTEyMzkxZjIyNTE2OWZmODRhNDZmNjI3NTSQBbZ9:
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:04.842 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.408 nvme0n1
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.408 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=:
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjVhNTcyYTczMWFmNTI4Y2UwZjIxODE5MGZiMDQyZjM5NjE3ZjQ5Yzc2NTBmYmNjOGViNDg3N2Q0N2FjZTAxZbrh+2U=:
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.409 06:40:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.976 nvme0n1
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==:
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==:
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==:
00:35:05.976 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]]
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==:
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=()
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]]
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]]
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]]
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:35:05.977 request:
00:35:05.977 {
00:35:05.977 "name": "nvme0",
00:35:05.977 "trtype": "tcp",
00:35:05.977 "traddr": "10.0.0.1",
00:35:05.977 "adrfam": "ipv4",
00:35:05.977 "trsvcid": "4420",
00:35:05.977 "subnqn": "nqn.2024-02.io.spdk:cnode0",
00:35:05.977 "hostnqn": "nqn.2024-02.io.spdk:host0",
00:35:05.977 "prchk_reftag": false,
00:35:05.977 "prchk_guard": false,
00:35:05.977 "hdgst": false,
00:35:05.977 "ddgst": false,
00:35:05.977 "allow_unrecognized_csi": false,
00:35:05.977 "method": "bdev_nvme_attach_controller",
00:35:05.977 "req_id": 1
00:35:05.977 }
00:35:05.977 Got JSON-RPC error
response 00:35:05.977 response: 00:35:05.977 { 00:35:05.977 "code": -5, 00:35:05.977 "message": "Input/output error" 00:35:05.977 } 00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:05.977 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.236 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.236 request: 
00:35:06.236 { 00:35:06.236 "name": "nvme0", 00:35:06.236 "trtype": "tcp", 00:35:06.236 "traddr": "10.0.0.1", 00:35:06.236 "adrfam": "ipv4", 00:35:06.236 "trsvcid": "4420", 00:35:06.236 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:06.236 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:06.236 "prchk_reftag": false, 00:35:06.236 "prchk_guard": false, 00:35:06.236 "hdgst": false, 00:35:06.236 "ddgst": false, 00:35:06.236 "dhchap_key": "key2", 00:35:06.236 "allow_unrecognized_csi": false, 00:35:06.236 "method": "bdev_nvme_attach_controller", 00:35:06.236 "req_id": 1 00:35:06.236 } 00:35:06.236 Got JSON-RPC error response 00:35:06.236 response: 00:35:06.236 { 00:35:06.236 "code": -5, 00:35:06.236 "message": "Input/output error" 00:35:06.236 } 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.237 06:40:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.237 request: 00:35:06.237 { 00:35:06.237 "name": "nvme0", 00:35:06.237 "trtype": "tcp", 00:35:06.237 "traddr": "10.0.0.1", 00:35:06.237 "adrfam": "ipv4", 00:35:06.237 "trsvcid": "4420", 00:35:06.237 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:06.237 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:06.237 "prchk_reftag": false, 00:35:06.237 "prchk_guard": false, 00:35:06.237 "hdgst": false, 00:35:06.237 "ddgst": false, 00:35:06.237 "dhchap_key": "key1", 00:35:06.237 "dhchap_ctrlr_key": "ckey2", 00:35:06.237 "allow_unrecognized_csi": false, 00:35:06.237 "method": "bdev_nvme_attach_controller", 00:35:06.237 "req_id": 1 00:35:06.237 } 00:35:06.237 Got JSON-RPC error response 00:35:06.237 response: 00:35:06.237 { 00:35:06.237 "code": -5, 00:35:06.237 "message": "Input/output error" 00:35:06.237 } 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.237 06:40:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.496 nvme0n1 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:06.496 06:40:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:06.496 
06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.496 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.755 request: 00:35:06.755 { 00:35:06.755 "name": "nvme0", 00:35:06.755 "dhchap_key": "key1", 00:35:06.755 "dhchap_ctrlr_key": "ckey2", 00:35:06.755 "method": "bdev_nvme_set_keys", 00:35:06.755 "req_id": 1 00:35:06.755 } 00:35:06.755 Got JSON-RPC error response 00:35:06.755 response: 
00:35:06.755 { 00:35:06.755 "code": -13, 00:35:06.755 "message": "Permission denied" 00:35:06.755 } 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:06.755 06:40:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.691 06:40:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:07.691 06:40:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OGJmMDExN2MwYTNiYTBiYzdlZDBmZDU0YjI5YjgwYzI2NThlMDAwNWYyNWUzMzAzX3sfwA==: 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: ]] 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NDMzMTBhMzRmNWRjOGFlYmYwNTBkZWM0OTY4NjhhYWVhZWM1MWQ0YmU5ZmM2OGQwP+b/LQ==: 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.067 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.068 nvme0n1 00:35:09.068 06:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjI4YTA0M2E1MjVjM2Y3ZDE1YjY0MTJhNWQ0ZmUyMjJFxRh8: 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: ]] 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWE3ZDE3NWJiNjQ4N2JiZjVkMmQ4NTk5N2RhMTU0NDHoYAIX: 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:09.068 06:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.068 request: 00:35:09.068 { 00:35:09.068 "name": "nvme0", 00:35:09.068 "dhchap_key": "key2", 00:35:09.068 "dhchap_ctrlr_key": "ckey1", 00:35:09.068 "method": "bdev_nvme_set_keys", 00:35:09.068 "req_id": 1 00:35:09.068 } 00:35:09.068 Got JSON-RPC error response 00:35:09.068 response: 00:35:09.068 { 00:35:09.068 "code": -13, 00:35:09.068 "message": "Permission denied" 00:35:09.068 } 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:09.068 06:41:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:09.068 06:41:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:10.004 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.004 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:10.004 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.004 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.004 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:10.263 rmmod nvme_tcp 
00:35:10.263 rmmod nvme_fabrics 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 1180471 ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1180471 ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1180471' 00:35:10.263 killing process with pid 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1180471 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:10.263 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:10.264 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:10.264 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:10.264 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:10.264 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:10.264 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:10.523 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:10.523 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:10.523 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.523 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:10.523 06:41:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:12.429 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:12.430 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:12.430 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:12.430 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:12.430 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:12.430 06:41:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:12.430 06:41:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:12.430 06:41:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:15.718 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:15.718 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:16.286 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:16.286 06:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Xm5 /tmp/spdk.key-null.9Qu /tmp/spdk.key-sha256.ZEs /tmp/spdk.key-sha384.VvY 
/tmp/spdk.key-sha512.QXq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:16.286 06:41:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:19.577 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:19.577 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:19.577 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:19.577 00:35:19.577 real 0m53.689s 00:35:19.577 user 0m48.647s 00:35:19.577 sys 0m12.517s 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.577 ************************************ 00:35:19.577 END TEST nvmf_auth_host 00:35:19.577 ************************************ 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.577 ************************************ 00:35:19.577 START TEST nvmf_digest 00:35:19.577 ************************************ 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:19.577 * Looking for test storage... 00:35:19.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:19.577 06:41:10 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.577 --rc genhtml_branch_coverage=1 00:35:19.577 --rc genhtml_function_coverage=1 00:35:19.577 --rc genhtml_legend=1 00:35:19.577 --rc geninfo_all_blocks=1 00:35:19.577 --rc geninfo_unexecuted_blocks=1 00:35:19.577 00:35:19.577 ' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.577 --rc genhtml_branch_coverage=1 00:35:19.577 --rc genhtml_function_coverage=1 00:35:19.577 --rc genhtml_legend=1 00:35:19.577 --rc geninfo_all_blocks=1 00:35:19.577 --rc geninfo_unexecuted_blocks=1 00:35:19.577 00:35:19.577 ' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.577 --rc genhtml_branch_coverage=1 00:35:19.577 --rc genhtml_function_coverage=1 00:35:19.577 --rc genhtml_legend=1 00:35:19.577 --rc geninfo_all_blocks=1 00:35:19.577 --rc geninfo_unexecuted_blocks=1 00:35:19.577 00:35:19.577 ' 00:35:19.577 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:19.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:19.577 --rc genhtml_branch_coverage=1 00:35:19.577 --rc genhtml_function_coverage=1 00:35:19.577 --rc genhtml_legend=1 00:35:19.578 --rc geninfo_all_blocks=1 00:35:19.578 --rc geninfo_unexecuted_blocks=1 00:35:19.578 00:35:19.578 ' 00:35:19.578 06:41:10 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:19.578 06:41:10 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:19.578 
06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:19.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:19.578 06:41:11 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:19.578 06:41:11 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:24.926 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:24.926 06:41:16 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:25.224 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:25.224 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:25.224 Found net devices under 0000:af:00.0: cvl_0_0 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:25.224 Found net devices under 0000:af:00.1: cvl_0_1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:25.224 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:25.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:25.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.405 ms 00:35:25.225 00:35:25.225 --- 10.0.0.2 ping statistics --- 00:35:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.225 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:25.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:25.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:35:25.225 00:35:25.225 --- 10.0.0.1 ping statistics --- 00:35:25.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:25.225 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:25.225 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:25.511 ************************************ 00:35:25.511 START TEST nvmf_digest_clean 00:35:25.511 ************************************ 00:35:25.511 
06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:25.511 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=1193955 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 1193955 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193955 ']' 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.512 06:41:16 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:25.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.512 06:41:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.512 [2024-12-13 06:41:16.959387] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:25.512 [2024-12-13 06:41:16.959431] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:25.512 [2024-12-13 06:41:17.038405] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.512 [2024-12-13 06:41:17.059692] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:25.512 [2024-12-13 06:41:17.059726] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:25.512 [2024-12-13 06:41:17.059733] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:25.512 [2024-12-13 06:41:17.059739] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:25.512 [2024-12-13 06:41:17.059744] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:25.512 [2024-12-13 06:41:17.060230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.512 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.778 null0 00:35:25.778 [2024-12-13 06:41:17.239540] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:25.778 [2024-12-13 06:41:17.263738] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
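Annotation: the `waitforlisten` calls traced above (autotest_common.sh@839–@868) block until the target process is alive and its RPC UNIX socket exists. A simplified sketch, assuming the defaults the log shows (`/var/tmp/spdk.sock`, `max_retries=100`); the real helper additionally probes readiness through `rpc.py`:

```shell
#!/usr/bin/env bash
# Simplified sketch of autotest_common.sh's waitforlisten: poll until
# the given pid is running AND its UNIX-domain RPC socket appears.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}   # default seen in the log
    local max_retries=100                     # default seen in the log
    local i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # process exited early
        [ -S "$rpc_addr" ] && return 0          # socket is there
        sleep 0.1
    done
    return 1  # timed out waiting for the socket
}
```

`kill -0` sends no signal; it only tests that the pid still exists, which is why the helper can distinguish "target crashed" from "socket not up yet".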
00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1193976 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1193976 /var/tmp/bperf.sock 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1193976 ']' 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:25.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:25.778 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:25.778 [2024-12-13 06:41:17.317089] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:25.778 [2024-12-13 06:41:17.317130] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193976 ] 00:35:25.778 [2024-12-13 06:41:17.392784] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:25.779 [2024-12-13 06:41:17.415256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:26.037 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:26.037 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:26.037 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:26.037 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:26.037 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:26.296 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.296 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:26.554 nvme0n1 00:35:26.554 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:26.554 06:41:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:26.554 Running I/O for 2 seconds... 00:35:28.425 25832.00 IOPS, 100.91 MiB/s [2024-12-13T05:41:20.079Z] 25356.50 IOPS, 99.05 MiB/s 00:35:28.425 Latency(us) 00:35:28.425 [2024-12-13T05:41:20.079Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.425 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:28.425 nvme0n1 : 2.01 25353.88 99.04 0.00 0.00 5043.79 2262.55 16227.96 00:35:28.425 [2024-12-13T05:41:20.079Z] =================================================================================================================== 00:35:28.425 [2024-12-13T05:41:20.079Z] Total : 25353.88 99.04 0.00 0.00 5043.79 2262.55 16227.96 00:35:28.425 { 00:35:28.425 "results": [ 00:35:28.425 { 00:35:28.425 "job": "nvme0n1", 00:35:28.425 "core_mask": "0x2", 00:35:28.425 "workload": "randread", 00:35:28.425 "status": "finished", 00:35:28.425 "queue_depth": 128, 00:35:28.425 "io_size": 4096, 00:35:28.425 "runtime": 2.008529, 00:35:28.425 "iops": 25353.878385624503, 00:35:28.425 "mibps": 99.03858744384571, 00:35:28.425 "io_failed": 0, 00:35:28.425 "io_timeout": 0, 00:35:28.425 "avg_latency_us": 5043.794308455926, 00:35:28.425 "min_latency_us": 2262.552380952381, 00:35:28.425 "max_latency_us": 16227.961904761905 00:35:28.425 } 00:35:28.426 ], 00:35:28.426 "core_count": 1 00:35:28.426 } 00:35:28.426 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:28.684 | select(.opcode=="crc32c") 00:35:28.684 | "\(.module_name) \(.executed)"' 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1193976 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193976 ']' 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193976 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193976 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193976' 00:35:28.684 killing process with pid 1193976 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193976 00:35:28.684 Received shutdown signal, test time was about 2.000000 seconds 00:35:28.684 00:35:28.684 Latency(us) 00:35:28.684 [2024-12-13T05:41:20.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:28.684 [2024-12-13T05:41:20.338Z] =================================================================================================================== 00:35:28.684 [2024-12-13T05:41:20.338Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:28.684 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193976 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194444 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1194444 /var/tmp/bperf.sock 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194444 ']' 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:28.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:28.943 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:28.943 [2024-12-13 06:41:20.531837] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:28.943 [2024-12-13 06:41:20.531883] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194444 ] 00:35:28.943 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:28.943 Zero copy mechanism will not be used. 
00:35:29.202 [2024-12-13 06:41:20.607654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:29.202 [2024-12-13 06:41:20.630054] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:29.202 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:29.202 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:29.202 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:29.202 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:29.202 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:29.460 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.460 06:41:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:29.719 nvme0n1 00:35:29.719 06:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:29.719 06:41:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:29.719 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:29.719 Zero copy mechanism will not be used. 00:35:29.719 Running I/O for 2 seconds... 
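Annotation: the MiB/s column bdevperf prints is just IOPS scaled by the IO size into mebibytes per second. Checking the first randread run's figures from above (25353.88 IOPS at 4096-byte IOs, reported as 99.04 MiB/s) with awk:

```shell
# Sanity-check bdevperf's throughput column: MiB/s = IOPS * io_size / 2^20.
# Numbers taken from the 4096-byte randread run in the log above.
awk 'BEGIN {
    iops = 25353.88; io_size = 4096
    printf "%.2f MiB/s\n", iops * io_size / (1024 * 1024)
}'
# prints: 99.04 MiB/s
```

The same relation reproduces the 730.14 MiB/s reported for the 131072-byte run (5841.15 IOPS × 131072 / 2^20).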
00:35:32.033 5592.00 IOPS, 699.00 MiB/s [2024-12-13T05:41:23.687Z] 5840.00 IOPS, 730.00 MiB/s 00:35:32.033 Latency(us) 00:35:32.033 [2024-12-13T05:41:23.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.033 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:32.033 nvme0n1 : 2.00 5841.15 730.14 0.00 0.00 2736.51 639.76 7146.54 00:35:32.033 [2024-12-13T05:41:23.687Z] =================================================================================================================== 00:35:32.033 [2024-12-13T05:41:23.687Z] Total : 5841.15 730.14 0.00 0.00 2736.51 639.76 7146.54 00:35:32.033 { 00:35:32.033 "results": [ 00:35:32.033 { 00:35:32.033 "job": "nvme0n1", 00:35:32.033 "core_mask": "0x2", 00:35:32.033 "workload": "randread", 00:35:32.033 "status": "finished", 00:35:32.033 "queue_depth": 16, 00:35:32.033 "io_size": 131072, 00:35:32.033 "runtime": 2.002345, 00:35:32.033 "iops": 5841.151250159189, 00:35:32.033 "mibps": 730.1439062698986, 00:35:32.033 "io_failed": 0, 00:35:32.033 "io_timeout": 0, 00:35:32.033 "avg_latency_us": 2736.5097570190865, 00:35:32.033 "min_latency_us": 639.7561904761905, 00:35:32.033 "max_latency_us": 7146.544761904762 00:35:32.033 } 00:35:32.033 ], 00:35:32.033 "core_count": 1 00:35:32.033 } 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:32.033 | select(.opcode=="crc32c") 00:35:32.033 | "\(.module_name) \(.executed)"' 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194444 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194444 ']' 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194444 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194444 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194444' 00:35:32.033 killing process with pid 1194444 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194444 00:35:32.033 Received shutdown signal, test time was about 2.000000 seconds 
00:35:32.033 00:35:32.033 Latency(us) 00:35:32.033 [2024-12-13T05:41:23.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:32.033 [2024-12-13T05:41:23.687Z] =================================================================================================================== 00:35:32.033 [2024-12-13T05:41:23.687Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:32.033 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194444 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1194959 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1194959 /var/tmp/bperf.sock 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1194959 ']' 00:35:32.292 06:41:23 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:32.292 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:32.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:32.293 [2024-12-13 06:41:23.767820] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:32.293 [2024-12-13 06:41:23.767867] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194959 ] 00:35:32.293 [2024-12-13 06:41:23.843695] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:32.293 [2024-12-13 06:41:23.865893] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:32.293 06:41:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:32.552 06:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:32.552 06:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:33.119 nvme0n1 00:35:33.119 06:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:33.119 06:41:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:33.119 Running I/O for 2 seconds... 
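Annotation: the `get_accel_stats` step after each run pipes `accel_get_stats` through the jq filter shown at host/digest.sh@37 to read back which accel module executed the crc32c operations (expected `software` here, since DSA is disabled). The filter can be exercised standalone; the stats JSON below is illustrative, shaped to match the fields the filter selects, not captured from this run:

```shell
# Run the exact jq filter from host/digest.sh@37 against a sample
# accel_get_stats-shaped document (illustrative values).
echo '{
  "operations": [
    { "opcode": "copy",   "module_name": "software", "executed": 10 },
    { "opcode": "crc32c", "module_name": "software", "executed": 12615 }
  ]
}' | jq -rc '.operations[]
| select(.opcode=="crc32c")
| "\(.module_name) \(.executed)"'
# prints: software 12615
```

The `read -r acc_module acc_executed` at host/digest.sh@93 then splits that one-line output into the module name and the executed count checked at @95–@96.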
00:35:34.990 28561.00 IOPS, 111.57 MiB/s [2024-12-13T05:41:26.903Z] 28718.50 IOPS, 112.18 MiB/s 00:35:35.249 Latency(us) 00:35:35.249 [2024-12-13T05:41:26.903Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.249 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:35.249 nvme0n1 : 2.01 28723.23 112.20 0.00 0.00 4450.51 1786.64 13481.69 00:35:35.249 [2024-12-13T05:41:26.903Z] =================================================================================================================== 00:35:35.249 [2024-12-13T05:41:26.903Z] Total : 28723.23 112.20 0.00 0.00 4450.51 1786.64 13481.69 00:35:35.249 { 00:35:35.249 "results": [ 00:35:35.249 { 00:35:35.249 "job": "nvme0n1", 00:35:35.249 "core_mask": "0x2", 00:35:35.249 "workload": "randwrite", 00:35:35.250 "status": "finished", 00:35:35.250 "queue_depth": 128, 00:35:35.250 "io_size": 4096, 00:35:35.250 "runtime": 2.006355, 00:35:35.250 "iops": 28723.231930540707, 00:35:35.250 "mibps": 112.20012472867464, 00:35:35.250 "io_failed": 0, 00:35:35.250 "io_timeout": 0, 00:35:35.250 "avg_latency_us": 4450.506671293967, 00:35:35.250 "min_latency_us": 1786.6361904761904, 00:35:35.250 "max_latency_us": 13481.691428571428 00:35:35.250 } 00:35:35.250 ], 00:35:35.250 "core_count": 1 00:35:35.250 } 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:35.250 | select(.opcode=="crc32c") 00:35:35.250 | "\(.module_name) \(.executed)"' 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1194959 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1194959 ']' 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1194959 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:35.250 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1194959 00:35:35.509 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:35.509 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:35.509 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1194959' 00:35:35.509 killing process with pid 1194959 00:35:35.509 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1194959 00:35:35.509 Received shutdown signal, test time was about 2.000000 seconds 
00:35:35.509 00:35:35.509 Latency(us) 00:35:35.509 [2024-12-13T05:41:27.163Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:35.509 [2024-12-13T05:41:27.163Z] =================================================================================================================== 00:35:35.509 [2024-12-13T05:41:27.163Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:35.509 06:41:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1194959 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1195566 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1195566 /var/tmp/bperf.sock 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1195566 ']' 00:35:35.509 06:41:27 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:35.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:35.509 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:35.509 [2024-12-13 06:41:27.113236] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:35.509 [2024-12-13 06:41:27.113284] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195566 ] 00:35:35.509 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:35.509 Zero copy mechanism will not be used. 
00:35:35.768 [2024-12-13 06:41:27.186213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:35.768 [2024-12-13 06:41:27.205796] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:35.768 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:35.768 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:35.768 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:35.768 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:35.768 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:36.027 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:36.027 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:36.286 nvme0n1 00:35:36.286 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:36.286 06:41:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:36.544 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:36.544 Zero copy mechanism will not be used. 00:35:36.544 Running I/O for 2 seconds... 
00:35:38.417 6332.00 IOPS, 791.50 MiB/s [2024-12-13T05:41:30.071Z] 6308.00 IOPS, 788.50 MiB/s 00:35:38.417 Latency(us) 00:35:38.417 [2024-12-13T05:41:30.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.417 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:38.417 nvme0n1 : 2.00 6303.89 787.99 0.00 0.00 2533.38 1810.04 6990.51 00:35:38.417 [2024-12-13T05:41:30.071Z] =================================================================================================================== 00:35:38.417 [2024-12-13T05:41:30.071Z] Total : 6303.89 787.99 0.00 0.00 2533.38 1810.04 6990.51 00:35:38.417 { 00:35:38.417 "results": [ 00:35:38.417 { 00:35:38.417 "job": "nvme0n1", 00:35:38.417 "core_mask": "0x2", 00:35:38.417 "workload": "randwrite", 00:35:38.417 "status": "finished", 00:35:38.417 "queue_depth": 16, 00:35:38.417 "io_size": 131072, 00:35:38.417 "runtime": 2.004317, 00:35:38.417 "iops": 6303.893046858357, 00:35:38.417 "mibps": 787.9866308572946, 00:35:38.417 "io_failed": 0, 00:35:38.417 "io_timeout": 0, 00:35:38.417 "avg_latency_us": 2533.379743494074, 00:35:38.417 "min_latency_us": 1810.0419047619048, 00:35:38.417 "max_latency_us": 6990.506666666667 00:35:38.417 } 00:35:38.417 ], 00:35:38.417 "core_count": 1 00:35:38.417 } 00:35:38.417 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:38.417 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:38.417 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:38.417 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:38.417 | select(.opcode=="crc32c") 00:35:38.417 | "\(.module_name) \(.executed)"' 00:35:38.417 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1195566 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1195566 ']' 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1195566 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1195566 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1195566' 00:35:38.677 killing process with pid 1195566 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1195566 00:35:38.677 Received shutdown signal, test time was about 2.000000 seconds 
00:35:38.677 00:35:38.677 Latency(us) 00:35:38.677 [2024-12-13T05:41:30.331Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:38.677 [2024-12-13T05:41:30.331Z] =================================================================================================================== 00:35:38.677 [2024-12-13T05:41:30.331Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:38.677 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1195566 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1193955 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1193955 ']' 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1193955 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1193955 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1193955' 00:35:38.936 killing process with pid 1193955 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1193955 00:35:38.936 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1193955 00:35:39.195 00:35:39.195 
real 0m13.752s 00:35:39.195 user 0m26.259s 00:35:39.195 sys 0m4.578s 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:39.195 ************************************ 00:35:39.195 END TEST nvmf_digest_clean 00:35:39.195 ************************************ 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:39.195 ************************************ 00:35:39.195 START TEST nvmf_digest_error 00:35:39.195 ************************************ 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.195 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=1196069 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 1196069 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196069 ']' 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:39.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.196 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.196 [2024-12-13 06:41:30.780861] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:39.196 [2024-12-13 06:41:30.780910] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:39.454 [2024-12-13 06:41:30.862546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.455 [2024-12-13 06:41:30.883327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:39.455 [2024-12-13 06:41:30.883363] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:39.455 [2024-12-13 06:41:30.883371] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:39.455 [2024-12-13 06:41:30.883377] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:39.455 [2024-12-13 06:41:30.883382] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:39.455 [2024-12-13 06:41:30.883878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.455 [2024-12-13 06:41:30.968355] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.455 06:41:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.455 06:41:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.455 null0 00:35:39.455 [2024-12-13 06:41:31.054239] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:39.455 [2024-12-13 06:41:31.078435] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196265 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196265 /var/tmp/bperf.sock 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196265 ']' 
00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:39.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:39.455 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.714 [2024-12-13 06:41:31.131008] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:39.714 [2024-12-13 06:41:31.131049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196265 ] 00:35:39.714 [2024-12-13 06:41:31.207535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.714 [2024-12-13 06:41:31.229855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:39.714 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:39.714 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:39.714 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:39.714 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:39.972 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:40.540 nvme0n1 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:40.540 06:41:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:40.540 Running I/O for 2 seconds... 00:35:40.540 [2024-12-13 06:41:32.100164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.100195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.100204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.112209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.112235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:15076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.112244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.121005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.121028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:23431 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.121036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.133264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.133286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9236 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.133296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.145850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.145873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:20184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.145881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.157572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.157595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:14483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.157604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.170728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.170752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.170761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.181818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.181839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:3428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.181847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.540 [2024-12-13 06:41:32.190430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.540 [2024-12-13 06:41:32.190462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.540 [2024-12-13 06:41:32.190472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.799 [2024-12-13 06:41:32.200194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.799 [2024-12-13 06:41:32.200216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:4207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.799 [2024-12-13 06:41:32.200224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.799 [2024-12-13 06:41:32.209662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:40.799 [2024-12-13 06:41:32.209683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:40.799 [2024-12-13 06:41:32.209691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:40.799 [2024-12-13 06:41:32.217645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 
00:35:40.799 [2024-12-13 06:41:32.217667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.217675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.229404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.229425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3008 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.229434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.240699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.240721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.240730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.249125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.249147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:16805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.249156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.260598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.260628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.260636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.270311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.270334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.270342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.279345] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.799 [2024-12-13 06:41:32.279366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:11341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.799 [2024-12-13 06:41:32.279374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.799 [2024-12-13 06:41:32.288267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.288288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.288296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.296446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.296474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:19604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.296482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.306723] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.306745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.306753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.316055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.316077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.316085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.324824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.324845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:13054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.324853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.333959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.333979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.333988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.343753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.343775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:467 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.343783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.352511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.352533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12188 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.352544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.363111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.363131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.363140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.373551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.373572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.373581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.382553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.382575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.382583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.393645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.393667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:12664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.393675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.404695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.404717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:12334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.404725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.417328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.417350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:12225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.417359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.427814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.427836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.427844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.436129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.436150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:20085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.436158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:40.800 [2024-12-13 06:41:32.446033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:40.800 [2024-12-13 06:41:32.446059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:40.800 [2024-12-13 06:41:32.446067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.456305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.456328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:23248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.456337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.465533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.465554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:21282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.465562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.474240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.474262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:8722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.474270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.483350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.483370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15026 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.483378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.492197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.492218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.492226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.503018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.503039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.503048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.512515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.512536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.512544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.522334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.522354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:17866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.522362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.530441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.530468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.530476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.541871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.541892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.541900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.550208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.550230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:23042 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.550237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.560732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.560754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:14083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.560762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.568978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.568999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.569006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.579057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.579077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:12159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.579086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.588346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.588367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.588375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.598921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.598941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.598949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.607524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.607547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:22951 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.607555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.616350] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.616371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.616379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.625383] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.625403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.625411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.634508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.634529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16545 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.634537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.646347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.646369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:1788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.646377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.654372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.654393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:17709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.654401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.664797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.664817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:20065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.664825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.674019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.674040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.674048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.684316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.684336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.684344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.693534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.693557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.060 [2024-12-13 06:41:32.693566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.060 [2024-12-13 06:41:32.702382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.060 [2024-12-13 06:41:32.702404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:11602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.061 [2024-12-13 06:41:32.702412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.061 [2024-12-13 06:41:32.712222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.061 [2024-12-13 06:41:32.712243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.061 [2024-12-13 06:41:32.712252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.721337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.721358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:19866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.721366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.730552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.730573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:12537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.730581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.738961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.738982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:22657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.738989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.750057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.750077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.750086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.759938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.759958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:4088 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.759971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.769678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.769698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.769709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.777896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.777916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.777924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.788795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.788816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.788824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.800942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.800963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:3520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.800971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.811768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.811788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.811796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.820387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.820407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.820416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.831655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.831676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:7956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.831684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.843096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.843116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.843124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.320 [2024-12-13 06:41:32.851849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.320 [2024-12-13 06:41:32.851870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:3848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.320 [2024-12-13 06:41:32.851878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.862011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.862035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.862044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.870799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.870820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.870828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.882278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.882300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:16073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.882308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.894154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.894175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:10971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.894183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.902843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.902864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.902873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.914688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.914709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:9337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.914717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.923347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.923368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.923376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.935258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.935278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.935286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.948709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.948729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:3850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.948737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.957071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.957091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.957099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.321 [2024-12-13 06:41:32.968242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.321 [2024-12-13 06:41:32.968262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:41.321 [2024-12-13 06:41:32.968271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:41.580 [2024-12-13 06:41:32.976107] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0)
00:35:41.580 [2024-12-13 06:41:32.976127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:71 nsid:1 lba:5737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:32.976135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:32.987767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:32.987789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:32.987797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:32.997536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:32.997557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:32.997565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.007404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.007425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:9057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.007434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.015872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.015892] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.015900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.025297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.025317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.025325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.034038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.034059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.034070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.043672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.043693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.043701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.052996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.053016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.053024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.060857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.060878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.060886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.070759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.070780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.070788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.082493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.082513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:22154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.082521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 25582.00 IOPS, 99.93 MiB/s [2024-12-13T05:41:33.234Z] 
[2024-12-13 06:41:33.095144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.095165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:13887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.095174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.103363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.103384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:16415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.103392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.113646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.113666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.113674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.125753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.125775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:18909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.125783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.134610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.134630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24301 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.134638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.145686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.145706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:17080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.145714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.153668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.153689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.153697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.580 [2024-12-13 06:41:33.165728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.580 [2024-12-13 06:41:33.165749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.580 [2024-12-13 06:41:33.165757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.176984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.177005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.581 [2024-12-13 06:41:33.177013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.184740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.184760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:5293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.581 [2024-12-13 06:41:33.184768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.194375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.194395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:1752 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.581 [2024-12-13 06:41:33.194403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.205495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.205515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:41.581 [2024-12-13 06:41:33.205527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.214072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.214094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:24410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.581 [2024-12-13 06:41:33.214102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.581 [2024-12-13 06:41:33.225278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.581 [2024-12-13 06:41:33.225300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:16138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.581 [2024-12-13 06:41:33.225308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.235308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.235329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21473 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.235338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.244066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.244086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:46 nsid:1 lba:14475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.244095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.255550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.255571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.255579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.265617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.265638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:3340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.265647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.274780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.274800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:12351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.274808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.286166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.286187] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:2296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.286195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.294825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.294849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:8329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.839 [2024-12-13 06:41:33.294857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.839 [2024-12-13 06:41:33.307767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.839 [2024-12-13 06:41:33.307787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.307795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.317698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.317719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:4252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.317727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.326094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.326116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:11292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.326123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.335568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.335588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:16958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.335597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.344881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.344900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.344908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.352934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.352954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:10574 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.352962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.364157] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.364178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:16874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.364186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.375276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.375297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17847 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.375305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.387130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.387151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.387159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.397885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.397906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.397914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.410108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.410130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.410138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.418932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.418954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:19217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.418962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.429091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.429111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5807 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.429119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.439942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.439963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.439971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.448887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.448909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.448917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.457697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.457718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.457726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.467104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.467127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.467136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.475861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.475882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2590 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 
06:41:33.475890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:41.840 [2024-12-13 06:41:33.485380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:41.840 [2024-12-13 06:41:33.485401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:7850 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:41.840 [2024-12-13 06:41:33.485409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.495569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.495590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.495598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.503980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.504001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.504008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.515390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.515410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9040 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.515418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.526771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.526792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.526800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.539057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.539078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:16379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.539086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.547467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.547488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:9195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.547496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.557573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.557595] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.557603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.568100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.568123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.568131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.578296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.578317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:10715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.578325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.586687] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.099 [2024-12-13 06:41:33.586710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.099 [2024-12-13 06:41:33.586718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.099 [2024-12-13 06:41:33.599754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.599776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:22587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.599785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.612079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.612101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:16295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.612109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.624115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.624138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:3724 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.624148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.632557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.632578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.632586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.644988] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.645010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:10900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.645021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.655826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.655848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:12930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.655856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.664823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.664844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.664852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.676842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.676863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:9348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.676872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.685593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.685615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:4287 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.685623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.696859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.696879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:21092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.696887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.705429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.705458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:11273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.705467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.714163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.714185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.714194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.723745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.723767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:14914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.723776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.735557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.735586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:23536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.735594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.100 [2024-12-13 06:41:33.745365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.100 [2024-12-13 06:41:33.745386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10087 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.100 [2024-12-13 06:41:33.745394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.755932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.755955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 
06:41:33.755964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.764233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.764260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:2645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.764268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.775377] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.775398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15609 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.775406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.786915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.786937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:24242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.786945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.799329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.799351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7781 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.799359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.807276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.807298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.807306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.817746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.817768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.817777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.827597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.827617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:21226 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.827625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.837518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.837539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.359 [2024-12-13 06:41:33.837547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.359 [2024-12-13 06:41:33.845145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.359 [2024-12-13 06:41:33.845166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:17786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.845174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.856200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.856221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.856229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.866316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.866337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:22464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.866344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.877322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.877343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4037 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.877351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.885953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.885976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:7019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.885985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.897167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.897189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:8988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.897198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.906159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.906180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:403 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.906192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.915223] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.915244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.915252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.924631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.924652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.924660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.932756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.932777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.932785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.944942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.944964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:13169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.944972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.955344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.955365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.955374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.963484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.963506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.963514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.974418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.974440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.974454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.983206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.983227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.983235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:33.993001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:33.993023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:33.993030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:34.001055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:34.001077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:15714 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:34.001085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.360 [2024-12-13 06:41:34.010736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.360 [2024-12-13 06:41:34.010757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.360 [2024-12-13 06:41:34.010765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.020730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.020752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:3327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 
06:41:34.020760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.030304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.030324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.030332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.038503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.038525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:17209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.038533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.047650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.047672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.047680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.058399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.058419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20031 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.058427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.069117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.069138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14815 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.069149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 [2024-12-13 06:41:34.082489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d606e0) 00:35:42.619 [2024-12-13 06:41:34.082510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:42.619 [2024-12-13 06:41:34.082518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:42.619 25434.00 IOPS, 99.35 MiB/s 00:35:42.619 Latency(us) 00:35:42.619 [2024-12-13T05:41:34.273Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.619 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:42.619 nvme0n1 : 2.04 24947.24 97.45 0.00 0.00 5025.20 2574.63 56673.04 00:35:42.619 [2024-12-13T05:41:34.273Z] =================================================================================================================== 00:35:42.619 [2024-12-13T05:41:34.273Z] Total : 24947.24 97.45 0.00 0.00 5025.20 2574.63 56673.04 00:35:42.619 { 00:35:42.619 "results": [ 00:35:42.619 { 00:35:42.619 "job": "nvme0n1", 00:35:42.619 "core_mask": "0x2", 00:35:42.619 "workload": 
"randread", 00:35:42.619 "status": "finished", 00:35:42.619 "queue_depth": 128, 00:35:42.619 "io_size": 4096, 00:35:42.619 "runtime": 2.044154, 00:35:42.619 "iops": 24947.239787217597, 00:35:42.619 "mibps": 97.45015541881874, 00:35:42.619 "io_failed": 0, 00:35:42.619 "io_timeout": 0, 00:35:42.619 "avg_latency_us": 5025.198809841295, 00:35:42.619 "min_latency_us": 2574.6285714285714, 00:35:42.619 "max_latency_us": 56673.03619047619 00:35:42.619 } 00:35:42.619 ], 00:35:42.619 "core_count": 1 00:35:42.619 } 00:35:42.619 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:42.619 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:42.619 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:42.619 | .driver_specific 00:35:42.619 | .nvme_error 00:35:42.619 | .status_code 00:35:42.619 | .command_transient_transport_error' 00:35:42.619 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 199 > 0 )) 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196265 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196265 ']' 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196265 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:42.878 06:41:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196265 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196265' 00:35:42.878 killing process with pid 1196265 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196265 00:35:42.878 Received shutdown signal, test time was about 2.000000 seconds 00:35:42.878 00:35:42.878 Latency(us) 00:35:42.878 [2024-12-13T05:41:34.532Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:42.878 [2024-12-13T05:41:34.532Z] =================================================================================================================== 00:35:42.878 [2024-12-13T05:41:34.532Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:42.878 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196265 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1196749 00:35:43.138 06:41:34 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1196749 /var/tmp/bperf.sock 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1196749 ']' 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:43.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.138 [2024-12-13 06:41:34.608944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:43.138 [2024-12-13 06:41:34.608995] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196749 ] 00:35:43.138 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:43.138 Zero copy mechanism will not be used. 
00:35:43.138 [2024-12-13 06:41:34.681043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:43.138 [2024-12-13 06:41:34.702844] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:43.138 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.397 06:41:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:43.969 nvme0n1 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:43.969 06:41:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:43.969 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:43.969 Zero copy mechanism will not be used. 00:35:43.969 Running I/O for 2 seconds... 00:35:43.969 [2024-12-13 06:41:35.466407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.969 [2024-12-13 06:41:35.466442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.969 [2024-12-13 06:41:35.466458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.969 [2024-12-13 06:41:35.471647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.969 [2024-12-13 06:41:35.471673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.969 [2024-12-13 06:41:35.471682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.969 
[2024-12-13 06:41:35.476188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.969 [2024-12-13 06:41:35.476211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.969 [2024-12-13 06:41:35.476220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.969 [2024-12-13 06:41:35.480745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.969 [2024-12-13 06:41:35.480768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.969 [2024-12-13 06:41:35.480777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.969 [2024-12-13 06:41:35.485218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.969 [2024-12-13 06:41:35.485240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.969 [2024-12-13 06:41:35.485250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.969 [2024-12-13 06:41:35.489807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.489829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.489838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.494534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.494561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.494570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.499072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.499094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.499102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.503563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.503584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.503592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.508027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.508048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.508056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.512519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.512541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.512549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.516978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.517000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.517008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.521472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.521493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.521502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.525984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.526005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:43.970 [2024-12-13 06:41:35.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.530505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.530527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.530535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.535035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.535057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.535065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.539480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.539501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.539509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.543871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.543893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.543901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.548258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.548279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.548287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.552806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.552827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.552836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.557442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.557468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.557477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.562668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.562691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.562700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.567776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.567798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.567806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.572347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.572372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.572380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.577671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.577694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.577702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.583758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 
00:35:43.970 [2024-12-13 06:41:35.583779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.583788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.586703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.586725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.586733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.592205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.592227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.592236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.597376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.597398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.597406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.602948] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.602970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.602979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.609222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.609244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.609252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.614290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.614312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.614320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:43.970 [2024-12-13 06:41:35.619461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:43.970 [2024-12-13 06:41:35.619483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:43.970 [2024-12-13 06:41:35.619492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.624811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.624834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.624842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.630106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.630128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.630137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.635630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.635652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.635661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.641717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.641739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.641747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.646851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.646872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.646881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.652102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.652123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.652131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.657246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.657267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.657275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.662714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.662736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 
06:41:35.662751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.668768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.668791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.668799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.673999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.674020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.674029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.679137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.679161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.679169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.684503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.684525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20768 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.684533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.689753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.689775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.689783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.695871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.695892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.695901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.701088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.701110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.701118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.705869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.705892] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.705900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.710479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.710503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.710511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.715732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.715754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.715762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.720996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.721018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.721027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.726211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.726236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.726245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.731638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.231 [2024-12-13 06:41:35.731660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.231 [2024-12-13 06:41:35.731668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.231 [2024-12-13 06:41:35.736225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.232 [2024-12-13 06:41:35.736247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.232 [2024-12-13 06:41:35.736255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.232 [2024-12-13 06:41:35.740632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.232 [2024-12-13 06:41:35.740654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.232 [2024-12-13 06:41:35.740662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.232 [2024-12-13 06:41:35.745062] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.745084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.745092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.749510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.749532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.749541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.754141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.754163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.754172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.758677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.758698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.758706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.763215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.763237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.763246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.768030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.768053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.768061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.772637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.772659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.772667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.777175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.777197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.777205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.781747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.781770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.781778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.786611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.786633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.786641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.791170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.791192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.791203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.795728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.795750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.795758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.800318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.800340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.800348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.804702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.804723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.804732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.809144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.809165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.809173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.813587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.813609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.818042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.818064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.818072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.822534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.822554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.822562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.826906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.826927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.826935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.831412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.831433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.831442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.835815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.835836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.835844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.840388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.840411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.840419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.844998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.845019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.845027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.849553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.849574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.849582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.854021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.232 [2024-12-13 06:41:35.854042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.232 [2024-12-13 06:41:35.854051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.232 [2024-12-13 06:41:35.858446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.858476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.858484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.233 [2024-12-13 06:41:35.863038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.863060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.863068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.233 [2024-12-13 06:41:35.867574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.867596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.867607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.233 [2024-12-13 06:41:35.872043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.872064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.872072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.233 [2024-12-13 06:41:35.876618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.876639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.876647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.233 [2024-12-13 06:41:35.881127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.233 [2024-12-13 06:41:35.881148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.233 [2024-12-13 06:41:35.881157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.885727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.885748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.885757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.890494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.890515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.890524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.895237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.895259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.895267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.899804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.899826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.899834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.904400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.904422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.904431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.908973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.908998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.909006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.913494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.913515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.913523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.917936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.917958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.917966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.922479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.922500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.922509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.927140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.927161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.927169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.931721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.931742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.931751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.936261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.936282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.936290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.940841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.940862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.940870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.945439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.945467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.945475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.950094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.950115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.950124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.954700] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.954721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.954729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.959281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.959303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.959311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.963812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.963833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.963841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.968464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.968485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.968493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.973070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.973092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.973100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.977627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.977649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.977659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.982153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.982174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.982183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.986644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.986666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.986677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.991134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.991160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.991168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:35.995639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.493 [2024-12-13 06:41:35.995661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.493 [2024-12-13 06:41:35.995669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.493 [2024-12-13 06:41:36.000067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.000089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.000097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.004426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.004452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.004461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.008806] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.008827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.008835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.013298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.013320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.013328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.017796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.017818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.017826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.022243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.022265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.022274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.026640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.026662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.026670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.030927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.030948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.030956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.035313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.035342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.035350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.040028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.040050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.040057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.045626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.045649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.045658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.051204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.051227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.051235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.056482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.056504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.056513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.061810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.061832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.061840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.067139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.067161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.067172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.073097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.073119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.073127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.078544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.078565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.078573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.084093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.084115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.084123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.089505] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.089526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.089534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.094990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.095012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.095020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.099939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.099961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:44.494 [2024-12-13 06:41:36.099970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:35:44.494 [2024-12-13 06:41:36.105030] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130)
00:35:44.494 [2024-12-13 06:41:36.105052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7200 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.105060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.110338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.110358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.110366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.115545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.115572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.115580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.120802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.120825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.120833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.126138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.126159] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.126167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.131574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.131604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.494 [2024-12-13 06:41:36.136940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.494 [2024-12-13 06:41:36.136961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.494 [2024-12-13 06:41:36.136968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.495 [2024-12-13 06:41:36.142298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.495 [2024-12-13 06:41:36.142319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.495 [2024-12-13 06:41:36.142327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.147510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 
06:41:36.147532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.147541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.152901] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.152923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.152931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.158545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.158567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.158575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.163917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.163939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.163947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.169305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.169327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.169335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.174648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.174670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.174678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.179798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.179819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.179827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.185082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.754 [2024-12-13 06:41:36.185104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.754 [2024-12-13 06:41:36.185111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.754 [2024-12-13 06:41:36.190384] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.190405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.190414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.195542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.195563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.195571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.200600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.200620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.200628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.205958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.205980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.205991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.211379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.211401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.211409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.217022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.217044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.217052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.222503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.222525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.222533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.228002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.228025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.228033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.233418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.233441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.233454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.239747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.239770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.239778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.247063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.247086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.247095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.254054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.254077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 
06:41:36.254086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.261661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.261688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.261697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.270128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.270152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.270161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.278162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.278184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.278193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.285950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.285973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11744 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.285982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.294143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.294166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.294174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.302466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.302488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.302496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.310217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.310239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.310247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.316919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.316942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.316950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.321403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.321424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.321432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.329628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.329651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.329659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.336536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.336558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.336567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.341951] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 
00:35:44.755 [2024-12-13 06:41:36.341973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.341981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.347532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.347554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.347563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.353282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.353313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.358753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.358775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.358783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:44.755 [2024-12-13 06:41:36.364231] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.755 [2024-12-13 06:41:36.364253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.755 [2024-12-13 06:41:36.364261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.369973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.369995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.370003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.375369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.375394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.375403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.380727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.380748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.380756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.386032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.386054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.386061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.391447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.391472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.391480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.396914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.396936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.396944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:44.756 [2024-12-13 06:41:36.402582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:44.756 [2024-12-13 06:41:36.402603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:44.756 [2024-12-13 06:41:36.402611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.409259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.409282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.409291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.414836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.414861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.414868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.421861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.421884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.421892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.427818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.427841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 
06:41:36.427849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.433616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.433639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.433647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.438656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.438680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.438688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.443967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.443990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.443998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.449329] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.449352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.449360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.455059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.455082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.455090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.460480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.460501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.460510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 5987.00 IOPS, 748.38 MiB/s [2024-12-13T05:41:36.670Z] [2024-12-13 06:41:36.467032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.467054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.467063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.472333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 
06:41:36.472354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.472365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.477668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.477689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.477697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.483044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.483065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.483073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.488432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.488461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.488471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.493930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.493952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.493960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.499314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.499335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.499343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.504655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.504678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.504686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.510052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.510074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.510082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.515369] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.515391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.515399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.520619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.520644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.520653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.526073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.526094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.526103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.531368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.531390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.531398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.536660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.536682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.536690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.541974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.016 [2024-12-13 06:41:36.541996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.016 [2024-12-13 06:41:36.542004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.016 [2024-12-13 06:41:36.547489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.547511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.547519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.552825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.552847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.552855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.558093] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.558114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.558122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.563382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.563403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.563411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.569457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.569479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.569487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.576639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.576662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 
06:41:36.576670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.584089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.584112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.584120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.592117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.592140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.592149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.597623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.597646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.597654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.602578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.602600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.602608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.607746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.607768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.607776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.612989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.613011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.613020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.618089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.618115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.618123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.623163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.623185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.623194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.628259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.628281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.628289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.633496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.633517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.633526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.638609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.638631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.638640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.643703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.643725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.643733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.648869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.648890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.648899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.654001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.654023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.654031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.659194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.659216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.659223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.017 [2024-12-13 06:41:36.664350] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.017 [2024-12-13 06:41:36.664372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.017 [2024-12-13 06:41:36.664380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.669524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.669547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.669556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.674650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.674672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.674680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.679785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.679808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.679815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.684940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.684962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.684970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.690134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.690155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.690163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.695180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.695201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.695210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.700346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.700368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.700376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.705496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.705518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.705528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.710625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.710648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.710656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.715759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.715780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.715789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.720993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.721015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.721023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.726340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.726362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.726370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.277 [2024-12-13 06:41:36.731023] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.277 [2024-12-13 06:41:36.731045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.277 [2024-12-13 06:41:36.731053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.736235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.736257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.736264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.741469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.741491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.741499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.746673] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.746695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.746703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.751851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.751875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.751882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.756936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.756957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.756966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.762064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.762085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:10 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.762093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.767207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.767229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.767237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.772499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.772521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.772529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.777771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.777793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.777801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.783074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.783097] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.783106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.788410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.788432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.788440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.793576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.793598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.793606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.798704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.798726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.798734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.803680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.803701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.803709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.806422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.806444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.806459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.811502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.811523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.811531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.817479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.817500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.817509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.823119] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.823141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.823149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.829020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.829042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.829050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.834315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.834336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.834344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.839685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.839706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.839717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.845067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.845088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.845096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.850432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.850460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.850469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.855876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.855897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.855905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.861138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.861159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.861168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.866430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.866457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.866465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.871731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.871752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.278 [2024-12-13 06:41:36.871759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.278 [2024-12-13 06:41:36.877038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.278 [2024-12-13 06:41:36.877059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.877066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.882311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.882333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 
06:41:36.882341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.887587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.887608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.887616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.893778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.893800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.893808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.899287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.899309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.899317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.905398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.905420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2784 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.905428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.910739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.910761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.910769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.916216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.916236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.916244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.921508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.921529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.921537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.279 [2024-12-13 06:41:36.926706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.279 [2024-12-13 06:41:36.926728] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.279 [2024-12-13 06:41:36.926736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.538 [2024-12-13 06:41:36.932208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.538 [2024-12-13 06:41:36.932228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.538 [2024-12-13 06:41:36.932239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.538 [2024-12-13 06:41:36.937479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.538 [2024-12-13 06:41:36.937500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.538 [2024-12-13 06:41:36.937508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.538 [2024-12-13 06:41:36.942686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.538 [2024-12-13 06:41:36.942707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.942715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.948119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 
00:35:45.539 [2024-12-13 06:41:36.948140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.948148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.953791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.953812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.953820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.959811] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.959832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.959840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.965443] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.965468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.965477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.970734] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.970755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.976080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.976101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.976109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.981515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.981540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.981548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.986740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.986763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.986771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.991899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.991920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.991929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:36.997010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:36.997048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:36.997056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.002108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.002130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.002138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.007259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.007281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.007290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.012209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.012230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.012239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.017115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.017135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.017143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.021971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.021991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.022000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.026931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.026952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 
06:41:37.026960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.032177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.032197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.032205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.038259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.038280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.038288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.044279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.044299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.044307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.049554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.049576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.049584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.054550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.054573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.054581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.060302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.060326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.060333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.066839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.066861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.066869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.072214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.072236] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.072247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.076850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.076872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.076880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.081755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.081777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.081784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.539 [2024-12-13 06:41:37.087322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.539 [2024-12-13 06:41:37.087344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.539 [2024-12-13 06:41:37.087353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.092390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.092413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.092421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.098761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.098783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.098791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.104724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.104746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.104754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.110269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.110291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.110299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.115251] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.115273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.115280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.120365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.120391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.120400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.125127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.125149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.125157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.131187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.131210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.131218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.136684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.136706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.136715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.141911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.141933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.141942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.147241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.147264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.147272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.153024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.153046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.153054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.156032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.156053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.156061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.161045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.161067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.161075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.165752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.165773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.165781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.171271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.171293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 
06:41:37.171301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.177498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.177520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.177528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.183027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.183049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.183057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.540 [2024-12-13 06:41:37.187881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.540 [2024-12-13 06:41:37.187904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.540 [2024-12-13 06:41:37.187913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.192550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.192573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.192582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.197216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.197238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.197247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.202327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.202349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.202357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.207835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.207860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.207869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.212333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.212355] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.212363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.216838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.216860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.216868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.221388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.221409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.221417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.226007] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.226028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.226036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.230570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 
06:41:37.230593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.230601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.235109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.235130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.235138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.239691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.239712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.800 [2024-12-13 06:41:37.239720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.800 [2024-12-13 06:41:37.244265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.800 [2024-12-13 06:41:37.244287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.244296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.248851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.248874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.248882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.253361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.253382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.253390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.257816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.257838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.257846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.262328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.262348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.262356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.266755] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.266776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.266785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.271305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.271327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.271336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.275821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.275843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.275851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.280335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.280356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.280364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.284897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.284919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.284930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.289441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.289468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.289477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.293871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.293892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.293900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.298059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.298081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.298089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.302372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.302398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.302406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.306789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.306811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.306819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.311164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.311185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.311194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.315608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.315630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.315637] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.320063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.320085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.320092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.324507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.324531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.324539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.329045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.329066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.329074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.333497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.333518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.333526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.337943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.337966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.337975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.342323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.342343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.342351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.346776] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.346798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.346806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.351228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.351250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.351257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.355639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.355661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.355669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.360145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.360167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.801 [2024-12-13 06:41:37.360175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.801 [2024-12-13 06:41:37.364568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.801 [2024-12-13 06:41:37.364590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.364598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.369147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.369169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.369177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.373548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.373569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.373577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.377990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.378011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.378019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.382502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.382523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.382531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.387361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.387382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.387390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.392689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.392711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.392720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.397385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.397406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.397414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.402133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.402155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.402166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.407765] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.407788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.407796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.412547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.412585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.412594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.417126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.417148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.417155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.421575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.421597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.421605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.426064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.426085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.426094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.430480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.430501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.430509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.434961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.434983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.434990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.439326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.439348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.439356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.443779] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.443799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.443807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.448198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.448219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.448227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:45.802 [2024-12-13 06:41:37.452654] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:45.802 [2024-12-13 06:41:37.452675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:45.802 [2024-12-13 06:41:37.452683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:46.061 [2024-12-13 06:41:37.457068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:46.061 [2024-12-13 06:41:37.457089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.061 [2024-12-13 06:41:37.457097] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:46.061 [2024-12-13 06:41:37.461596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:46.061 [2024-12-13 06:41:37.461618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.061 [2024-12-13 06:41:37.461626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:46.061 6036.00 IOPS, 754.50 MiB/s [2024-12-13T05:41:37.715Z] [2024-12-13 06:41:37.466954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x148b130) 00:35:46.061 [2024-12-13 06:41:37.466975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:46.061 [2024-12-13 06:41:37.466983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:46.061 00:35:46.061 Latency(us) 00:35:46.061 [2024-12-13T05:41:37.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.061 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:46.061 nvme0n1 : 2.00 6039.18 754.90 0.00 0.00 2645.79 643.66 10673.01 00:35:46.061 [2024-12-13T05:41:37.715Z] =================================================================================================================== 00:35:46.061 [2024-12-13T05:41:37.715Z] Total : 6039.18 754.90 0.00 0.00 2645.79 643.66 10673.01 00:35:46.061 { 00:35:46.061 "results": [ 00:35:46.061 { 00:35:46.061 "job": "nvme0n1", 00:35:46.061 "core_mask": "0x2", 00:35:46.061 "workload": "randread", 00:35:46.061 "status": "finished", 00:35:46.061 
"queue_depth": 16, 00:35:46.061 "io_size": 131072, 00:35:46.061 "runtime": 2.003748, 00:35:46.061 "iops": 6039.182571860334, 00:35:46.061 "mibps": 754.8978214825418, 00:35:46.061 "io_failed": 0, 00:35:46.061 "io_timeout": 0, 00:35:46.061 "avg_latency_us": 2645.7894410930226, 00:35:46.061 "min_latency_us": 643.6571428571428, 00:35:46.061 "max_latency_us": 10673.005714285715 00:35:46.061 } 00:35:46.061 ], 00:35:46.061 "core_count": 1 00:35:46.061 } 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:46.061 | .driver_specific 00:35:46.061 | .nvme_error 00:35:46.061 | .status_code 00:35:46.061 | .command_transient_transport_error' 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 391 > 0 )) 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1196749 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196749 ']' 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196749 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:46.061 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196749 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196749' 00:35:46.320 killing process with pid 1196749 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196749 00:35:46.320 Received shutdown signal, test time was about 2.000000 seconds 00:35:46.320 00:35:46.320 Latency(us) 00:35:46.320 [2024-12-13T05:41:37.974Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:46.320 [2024-12-13T05:41:37.974Z] =================================================================================================================== 00:35:46.320 [2024-12-13T05:41:37.974Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196749 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197225 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@60 -- # waitforlisten 1197225 /var/tmp/bperf.sock 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197225 ']' 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:46.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:46.320 06:41:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:46.320 [2024-12-13 06:41:37.953128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:35:46.320 [2024-12-13 06:41:37.953174] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197225 ] 00:35:46.579 [2024-12-13 06:41:38.028576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.579 [2024-12-13 06:41:38.050550] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.579 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.579 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:46.579 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:46.579 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:46.838 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:47.097 nvme0n1 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:47.097 06:41:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:47.356 Running I/O for 2 seconds... 
00:35:47.356 [2024-12-13 06:41:38.849569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee4578 00:35:47.356 [2024-12-13 06:41:38.850475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:3738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.850502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.858640] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeb760 00:35:47.356 [2024-12-13 06:41:38.859560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.859580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.867042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0ff8 00:35:47.356 [2024-12-13 06:41:38.867588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.867607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.876177] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eebfd0 00:35:47.356 [2024-12-13 06:41:38.876607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.876626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.885655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee38d0 00:35:47.356 [2024-12-13 06:41:38.886196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:25592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.886214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.895043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efdeb0 00:35:47.356 [2024-12-13 06:41:38.895709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.895727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.903351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efa7d8 00:35:47.356 [2024-12-13 06:41:38.904695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:19270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.904713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.911221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6738 00:35:47.356 [2024-12-13 06:41:38.911827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.911844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.920592] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eed920 00:35:47.356 [2024-12-13 06:41:38.921305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21491 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.921324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.929972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5ec8 00:35:47.356 [2024-12-13 06:41:38.930745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:15316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.930763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.938769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eddc00 00:35:47.356 [2024-12-13 06:41:38.939553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.939574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.949635] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef4b08 00:35:47.356 [2024-12-13 06:41:38.950976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:11801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.950993] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.959040] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efb048 00:35:47.356 [2024-12-13 06:41:38.960486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:8004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.960504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.965341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eddc00 00:35:47.356 [2024-12-13 06:41:38.965887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:15194 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.965906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.974778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee8088 00:35:47.356 [2024-12-13 06:41:38.975596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.975614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.983628] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeaab8 00:35:47.356 [2024-12-13 06:41:38.984193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:22815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 
[2024-12-13 06:41:38.984211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:38.994646] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeee38 00:35:47.356 [2024-12-13 06:41:38.995936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:38.995955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:47.356 [2024-12-13 06:41:39.004039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee9e10 00:35:47.356 [2024-12-13 06:41:39.005443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.356 [2024-12-13 06:41:39.005464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.010511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efd208 00:35:47.616 [2024-12-13 06:41:39.011193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.011211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.019107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5ec8 00:35:47.616 [2024-12-13 06:41:39.019677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3573 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.019701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.028482] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8a50 00:35:47.616 [2024-12-13 06:41:39.029152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.029171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.037811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efb8b8 00:35:47.616 [2024-12-13 06:41:39.038678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:17853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.038696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.047140] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee99d8 00:35:47.616 [2024-12-13 06:41:39.048118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.048136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.055509] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6890 00:35:47.616 [2024-12-13 06:41:39.056146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:98 nsid:1 lba:16345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.056164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.064538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6738 00:35:47.616 [2024-12-13 06:41:39.064981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:9354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.065000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.074800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef81e0 00:35:47.616 [2024-12-13 06:41:39.076021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.076039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.084147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0350 00:35:47.616 [2024-12-13 06:41:39.085491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.085509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.093479] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee49b0 00:35:47.616 [2024-12-13 06:41:39.094871] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.094889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.099864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef3e60 00:35:47.616 [2024-12-13 06:41:39.100427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:8498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.100445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.109503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300 00:35:47.616 [2024-12-13 06:41:39.110299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.110317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.118359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6020 00:35:47.616 [2024-12-13 06:41:39.119240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.119258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.127726] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee12d8 00:35:47.616 
[2024-12-13 06:41:39.128701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.128719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.137055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eff3c8 00:35:47.616 [2024-12-13 06:41:39.138160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.138178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.146421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef4298 00:35:47.616 [2024-12-13 06:41:39.147597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:2001 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.147615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.155545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee8d30 00:35:47.616 [2024-12-13 06:41:39.156761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.156779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.162988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd00dc0) with pdu=0x200016ee0630 00:35:47.616 [2024-12-13 06:41:39.163456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:21965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.163475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.172032] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0350 00:35:47.616 [2024-12-13 06:41:39.172721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.172739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.181968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0350 00:35:47.616 [2024-12-13 06:41:39.183180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.183199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.190842] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eff3c8 00:35:47.616 [2024-12-13 06:41:39.191758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.191776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.199256] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eecc78 00:35:47.616 [2024-12-13 06:41:39.200209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:3855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.200228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.207922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eefae0 00:35:47.616 [2024-12-13 06:41:39.208638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:16801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.208657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.217203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5a90 00:35:47.616 [2024-12-13 06:41:39.217769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:11734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.217788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:47.616 [2024-12-13 06:41:39.226209] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6fa8 00:35:47.616 [2024-12-13 06:41:39.227085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6962 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:47.616 [2024-12-13 06:41:39.227104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 
00:35:47.616 [2024-12-13 06:41:39.235351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eff3c8
00:35:47.616 [2024-12-13 06:41:39.236227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.616 [2024-12-13 06:41:39.236246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:47.616 [2024-12-13 06:41:39.244274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef57b0
00:35:47.616 [2024-12-13 06:41:39.245187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.616 [2024-12-13 06:41:39.245205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:47.617 [2024-12-13 06:41:39.253230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5220
00:35:47.617 [2024-12-13 06:41:39.254092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:25170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.617 [2024-12-13 06:41:39.254113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:47.617 [2024-12-13 06:41:39.262105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eebb98
00:35:47.617 [2024-12-13 06:41:39.262988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.617 [2024-12-13 06:41:39.263007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.271443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee1f80
00:35:47.875 [2024-12-13 06:41:39.272114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:10211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.272132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.280641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eee190
00:35:47.875 [2024-12-13 06:41:39.281650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:13532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.281668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.289552] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efc560
00:35:47.875 [2024-12-13 06:41:39.290544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:18453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.290562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.298485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efeb58
00:35:47.875 [2024-12-13 06:41:39.299465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:25255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.299483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.308572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eebfd0
00:35:47.875 [2024-12-13 06:41:39.310031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.310049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.315064] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6cc8
00:35:47.875 [2024-12-13 06:41:39.315813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.315832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.324460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eddc00
00:35:47.875 [2024-12-13 06:41:39.325229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.325247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.333458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0bc0
00:35:47.875 [2024-12-13 06:41:39.334339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:18906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.334357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.343002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef96f8
00:35:47.875 [2024-12-13 06:41:39.343975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.343993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.351892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8618
00:35:47.875 [2024-12-13 06:41:39.352560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:21404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.352579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.360833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8618
00:35:47.875 [2024-12-13 06:41:39.361493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:17143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.361511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.370069] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef96f8
00:35:47.875 [2024-12-13 06:41:39.370912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.875 [2024-12-13 06:41:39.370931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:35:47.875 [2024-12-13 06:41:39.379480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeaab8
00:35:47.875 [2024-12-13 06:41:39.380500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:9847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.380518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.388281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee9e10
00:35:47.876 [2024-12-13 06:41:39.389065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:10939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.389084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.396498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee3060
00:35:47.876 [2024-12-13 06:41:39.397353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.397371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.406694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eec408
00:35:47.876 [2024-12-13 06:41:39.407709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:25339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.407728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.415311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0788
00:35:47.876 [2024-12-13 06:41:39.416260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:15195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.416279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.425280] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7100
00:35:47.876 [2024-12-13 06:41:39.426411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.426430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.434590] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf988
00:35:47.876 [2024-12-13 06:41:39.435850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.435869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.440998] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8e88
00:35:47.876 [2024-12-13 06:41:39.441649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.441667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.452705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efa7d8
00:35:47.876 [2024-12-13 06:41:39.453983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.454001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.462235] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee01f8
00:35:47.876 [2024-12-13 06:41:39.463724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:14599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.463743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.468741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7100
00:35:47.876 [2024-12-13 06:41:39.469402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:12005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.469420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.478197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7100
00:35:47.876 [2024-12-13 06:41:39.478968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.478986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.487107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7100
00:35:47.876 [2024-12-13 06:41:39.487888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:2134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.487910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.495438] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef1430
00:35:47.876 [2024-12-13 06:41:39.496202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:11992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.496219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.506366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef3e60
00:35:47.876 [2024-12-13 06:41:39.507467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:13144 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.507485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.513683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efc998
00:35:47.876 [2024-12-13 06:41:39.514299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:10482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.514318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:35:47.876 [2024-12-13 06:41:39.522880] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeb760
00:35:47.876 [2024-12-13 06:41:39.523574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:24280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:47.876 [2024-12-13 06:41:39.523592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.532443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eefae0
00:35:48.135 [2024-12-13 06:41:39.533314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.533332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.542060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee99d8
00:35:48.135 [2024-12-13 06:41:39.543202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.543221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.551422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef57b0
00:35:48.135 [2024-12-13 06:41:39.552672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.552690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.559699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efac10
00:35:48.135 [2024-12-13 06:41:39.560486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.560504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.568690] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ede470
00:35:48.135 [2024-12-13 06:41:39.569485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.569504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.576852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eea680
00:35:48.135 [2024-12-13 06:41:39.577629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.577648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.587490] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eea680
00:35:48.135 [2024-12-13 06:41:39.588766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:21933 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.588785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.594595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee84c0
00:35:48.135 [2024-12-13 06:41:39.595367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.595384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.605194] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee84c0
00:35:48.135 [2024-12-13 06:41:39.606528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.606547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.614720] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef5be8
00:35:48.135 [2024-12-13 06:41:39.616177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:16556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.616195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.621216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef5be8
00:35:48.135 [2024-12-13 06:41:39.621969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.621987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.632136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efe720
00:35:48.135 [2024-12-13 06:41:39.633258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:9511 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.633276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.640596] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef92c0
00:35:48.135 [2024-12-13 06:41:39.641626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:20405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.135 [2024-12-13 06:41:39.641644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:35:48.135 [2024-12-13 06:41:39.649418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5220
00:35:48.136 [2024-12-13 06:41:39.650201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:7768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.650220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.657595] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7538
00:35:48.136 [2024-12-13 06:41:39.658462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:4406 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.658479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.666940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee1f80
00:35:48.136 [2024-12-13 06:41:39.667958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:17829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.667977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.677921] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee0a68
00:35:48.136 [2024-12-13 06:41:39.679387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:12203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.679405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.684411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efda78
00:35:48.136 [2024-12-13 06:41:39.685156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:5720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.685174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.693755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeb328
00:35:48.136 [2024-12-13 06:41:39.694556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:14563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.694574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.704339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeb328
00:35:48.136 [2024-12-13 06:41:39.705619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:20820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.705637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.710734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6b70
00:35:48.136 [2024-12-13 06:41:39.711294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:5310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.711312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.721311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6b70
00:35:48.136 [2024-12-13 06:41:39.722453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:4035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.722474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.729735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef2948
00:35:48.136 [2024-12-13 06:41:39.730592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:14684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.730609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.738714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5a90
00:35:48.136 [2024-12-13 06:41:39.739624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.739643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.748024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5658
00:35:48.136 [2024-12-13 06:41:39.749073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6729 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.749091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.757079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef5be8
00:35:48.136 [2024-12-13 06:41:39.757664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.757683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.767299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee3d08
00:35:48.136 [2024-12-13 06:41:39.768691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10931 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.768709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.776339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef3e60
00:35:48.136 [2024-12-13 06:41:39.777723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:19436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.777741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:35:48.136 [2024-12-13 06:41:39.782481] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efc998
00:35:48.136 [2024-12-13 06:41:39.783154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:18069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.136 [2024-12-13 06:41:39.783172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.792168] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300
00:35:48.396 [2024-12-13 06:41:39.792998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.793017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.801288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5658
00:35:48.396 [2024-12-13 06:41:39.802090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:7913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.802112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.810455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eddc00
00:35:48.396 [2024-12-13 06:41:39.810931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.810949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.821673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee4de8
00:35:48.396 [2024-12-13 06:41:39.823213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:9434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.823230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.828120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeb760
00:35:48.396 [2024-12-13 06:41:39.828953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:24032 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.828971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.838940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef3a28
00:35:48.396 [2024-12-13 06:41:39.840735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.840753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:35:48.396 28290.00 IOPS, 110.51 MiB/s [2024-12-13T05:41:40.050Z] [2024-12-13 06:41:39.846938] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efdeb0
00:35:48.396 [2024-12-13 06:41:39.848228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.848246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.854573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efda78
00:35:48.396 [2024-12-13 06:41:39.855261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:12099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.855278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.864058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef96f8
00:35:48.396 [2024-12-13 06:41:39.864872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.864891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.873363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee3498
00:35:48.396 [2024-12-13 06:41:39.874307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.874326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.882475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300
00:35:48.396 [2024-12-13 06:41:39.882976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.882994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.891417] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ede038
00:35:48.396 [2024-12-13 06:41:39.892146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:35:48.396 [2024-12-13 06:41:39.892165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:35:48.396 [2024-12-13 06:41:39.900732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0)
with pdu=0x200016ef7538 00:35:48.396 [2024-12-13 06:41:39.901577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.396 [2024-12-13 06:41:39.901595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:48.396 [2024-12-13 06:41:39.910060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6458 00:35:48.396 [2024-12-13 06:41:39.911108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.396 [2024-12-13 06:41:39.911127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:48.396 [2024-12-13 06:41:39.918484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eefae0 00:35:48.396 [2024-12-13 06:41:39.919522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.396 [2024-12-13 06:41:39.919541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:48.396 [2024-12-13 06:41:39.927517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8618 00:35:48.396 [2024-12-13 06:41:39.928574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4053 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.396 [2024-12-13 06:41:39.928592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:35:48.396 [2024-12-13 06:41:39.936012] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6fa8 00:35:48.396 [2024-12-13 06:41:39.936984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:21944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.396 [2024-12-13 06:41:39.937003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:48.396 [2024-12-13 06:41:39.947029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeea00 00:35:48.396 [2024-12-13 06:41:39.948571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:3177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.948587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.953339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee1b48 00:35:48.397 [2024-12-13 06:41:39.954115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.954134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.961797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee3498 00:35:48.397 [2024-12-13 06:41:39.962511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:20226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.962530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.972615] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef7970 00:35:48.397 [2024-12-13 06:41:39.973681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:2078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.973699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.981925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efd640 00:35:48.397 [2024-12-13 06:41:39.983157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.983175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.988593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee7818 00:35:48.397 [2024-12-13 06:41:39.989302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:14814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:39.989320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:39.999376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef92c0 00:35:48.397 [2024-12-13 06:41:40.000504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:40.000522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:48.397 [2024-12-13 06:41:40.009945] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee95a0 00:35:48.397 [2024-12-13 06:41:40.011852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:19546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:40.011875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:40.017710] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6cc8 00:35:48.397 [2024-12-13 06:41:40.018572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:11508 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:40.018597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:40.031118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeea00 00:35:48.397 [2024-12-13 06:41:40.032483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:6052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:40.032508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:48.397 [2024-12-13 06:41:40.040683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efe720 00:35:48.397 [2024-12-13 06:41:40.042162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.397 [2024-12-13 06:41:40.042185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:9 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:48.656 [2024-12-13 06:41:40.050276] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef92c0 00:35:48.656 [2024-12-13 06:41:40.051879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16753 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.656 [2024-12-13 06:41:40.051898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:35:48.656 [2024-12-13 06:41:40.056787] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efb480 00:35:48.656 [2024-12-13 06:41:40.057546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:12336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.656 [2024-12-13 06:41:40.057565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:35:48.656 [2024-12-13 06:41:40.067316] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8618 00:35:48.656 [2024-12-13 06:41:40.068193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.656 [2024-12-13 06:41:40.068214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:35:48.656 [2024-12-13 06:41:40.076155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efbcf0 00:35:48.656 [2024-12-13 06:41:40.077469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.656 [2024-12-13 06:41:40.077489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:48.656 [2024-12-13 06:41:40.085769] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef4298 00:35:48.656 [2024-12-13 06:41:40.086589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:11480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.656 [2024-12-13 06:41:40.086608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.095042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee8088 00:35:48.657 [2024-12-13 06:41:40.096172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.096191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.103617] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee7c50 00:35:48.657 [2024-12-13 06:41:40.104599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.104618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.112203] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee4140 00:35:48.657 [2024-12-13 06:41:40.113068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:13957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.113086] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.122970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0788 00:35:48.657 [2024-12-13 06:41:40.124332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:18550 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.124351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.132568] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0ff8 00:35:48.657 [2024-12-13 06:41:40.134061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.134080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.139197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee88f8 00:35:48.657 [2024-12-13 06:41:40.139965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.139984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.150327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee8088 00:35:48.657 [2024-12-13 06:41:40.151465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 
[2024-12-13 06:41:40.151484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.160791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eea248 00:35:48.657 [2024-12-13 06:41:40.162413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:8350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.162431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.167393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee7818 00:35:48.657 [2024-12-13 06:41:40.168273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:3485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.168292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.176674] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efef90 00:35:48.657 [2024-12-13 06:41:40.177538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:12605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.177557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.186271] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf988 00:35:48.657 [2024-12-13 06:41:40.187061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:4101 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.187080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.195430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee01f8 00:35:48.657 [2024-12-13 06:41:40.196211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:24087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.196230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.204890] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef46d0 00:35:48.657 [2024-12-13 06:41:40.205577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.205595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.213547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efb480 00:35:48.657 [2024-12-13 06:41:40.214791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:18865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.214809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.221384] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300 00:35:48.657 [2024-12-13 06:41:40.222049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:113 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.222067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.231003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eed920 00:35:48.657 [2024-12-13 06:41:40.231771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:16407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.231789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.240575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eec408 00:35:48.657 [2024-12-13 06:41:40.241463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.241481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.250158] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efc998 00:35:48.657 [2024-12-13 06:41:40.251170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:21647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.251188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.259600] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5a90 00:35:48.657 [2024-12-13 06:41:40.260155] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:18368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.260173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.269139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6020 00:35:48.657 [2024-12-13 06:41:40.269821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.269840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.277789] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eec408 00:35:48.657 [2024-12-13 06:41:40.279044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.279065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.285662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef46d0 00:35:48.657 [2024-12-13 06:41:40.286299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:13303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.286317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.294973] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300 00:35:48.657 
[2024-12-13 06:41:40.295616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.295635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:35:48.657 [2024-12-13 06:41:40.305807] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef1430 00:35:48.657 [2024-12-13 06:41:40.306830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:25265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.657 [2024-12-13 06:41:40.306849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.314919] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf550 00:35:48.917 [2024-12-13 06:41:40.315941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:8707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.315960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.323495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf550 00:35:48.917 [2024-12-13 06:41:40.324395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.324413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.333101] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd00dc0) with pdu=0x200016ef7da8 00:35:48.917 [2024-12-13 06:41:40.333927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:1346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.333946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.341649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eea680 00:35:48.917 [2024-12-13 06:41:40.342533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:23525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.342552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.351220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eec840 00:35:48.917 [2024-12-13 06:41:40.352234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:16259 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.352252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.360497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eddc00 00:35:48.917 [2024-12-13 06:41:40.361065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.361084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.369129] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee0ea0 00:35:48.917 [2024-12-13 06:41:40.369667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:17827 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.369687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.380776] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6458 00:35:48.917 [2024-12-13 06:41:40.382298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.382316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.387242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ede470 00:35:48.917 [2024-12-13 06:41:40.387906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.387924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.396977] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5220 00:35:48.917 [2024-12-13 06:41:40.397888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.397906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:35:48.917 [2024-12-13 06:41:40.406788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef2948 00:35:48.917 [2024-12-13 06:41:40.407842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:791 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.407860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.416135] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee0a68 00:35:48.917 [2024-12-13 06:41:40.416727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:2281 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.416745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.425046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee99d8 00:35:48.917 [2024-12-13 06:41:40.425931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:25308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.425950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.434155] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edece0 00:35:48.917 [2024-12-13 06:41:40.434972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:4488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.434990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:80 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.442855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6458 00:35:48.917 [2024-12-13 06:41:40.443660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.443678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.452147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efac10 00:35:48.917 [2024-12-13 06:41:40.452988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.453006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.461705] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee9168 00:35:48.917 [2024-12-13 06:41:40.462505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.462523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.471260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee88f8 00:35:48.917 [2024-12-13 06:41:40.472201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:14245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.917 [2024-12-13 06:41:40.472220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:48.917 [2024-12-13 06:41:40.479948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efdeb0 00:35:48.917 [2024-12-13 06:41:40.480749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.480768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.489788] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eee190 00:35:48.918 [2024-12-13 06:41:40.490829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:4667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.490848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.499370] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eff3c8 00:35:48.918 [2024-12-13 06:41:40.500452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:10062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.500471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.508495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee9168 00:35:48.918 [2024-12-13 06:41:40.509647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:2860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.509665] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.518060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef92c0 00:35:48.918 [2024-12-13 06:41:40.519344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:6188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.519366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.527649] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edfdc0 00:35:48.918 [2024-12-13 06:41:40.529050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:14453 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.529068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.536942] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efd208 00:35:48.918 [2024-12-13 06:41:40.538347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.538365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.544610] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee0630 00:35:48.918 [2024-12-13 06:41:40.545204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:25163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 
[2024-12-13 06:41:40.545222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.553458] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eeff18 00:35:48.918 [2024-12-13 06:41:40.554322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.554340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:35:48.918 [2024-12-13 06:41:40.562715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eebb98 00:35:48.918 [2024-12-13 06:41:40.563642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:48.918 [2024-12-13 06:41:40.563660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:35:49.177 [2024-12-13 06:41:40.572907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eee190 00:35:49.177 [2024-12-13 06:41:40.573877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.177 [2024-12-13 06:41:40.573896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:35:49.177 [2024-12-13 06:41:40.582497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eef6a8 00:35:49.178 [2024-12-13 06:41:40.583798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:6814 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.583816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.591790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef20d8 00:35:49.178 [2024-12-13 06:41:40.593082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.593101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.599773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ede470 00:35:49.178 [2024-12-13 06:41:40.601082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.601101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.607636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5a90 00:35:49.178 [2024-12-13 06:41:40.608309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.608327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.617839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efa3a0 00:35:49.178 [2024-12-13 06:41:40.618569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:104 nsid:1 lba:17203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.618587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.627295] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef0350 00:35:49.178 [2024-12-13 06:41:40.628236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6393 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.628255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.635994] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efc998 00:35:49.178 [2024-12-13 06:41:40.636921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:14457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.636939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.645292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efa7d8 00:35:49.178 [2024-12-13 06:41:40.646232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.646249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.654697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee8d30 00:35:49.178 [2024-12-13 06:41:40.655298] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:11911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.655317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.664275] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef6890 00:35:49.178 [2024-12-13 06:41:40.664999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:10997 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.665018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.673180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efac10 00:35:49.178 [2024-12-13 06:41:40.674233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.674251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.682287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efdeb0 00:35:49.178 [2024-12-13 06:41:40.683259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:1048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.683277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.691243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eed4e8 00:35:49.178 
[2024-12-13 06:41:40.691864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.691883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.700767] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef2948 00:35:49.178 [2024-12-13 06:41:40.701836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.701854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.710173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5658 00:35:49.178 [2024-12-13 06:41:40.711336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.711353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.719476] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef8618 00:35:49.178 [2024-12-13 06:41:40.720748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.720765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.728817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd00dc0) with pdu=0x200016ef9b30 00:35:49.178 [2024-12-13 06:41:40.730226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.730244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.735285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ef2510 00:35:49.178 [2024-12-13 06:41:40.736016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.736034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.746434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee9e10 00:35:49.178 [2024-12-13 06:41:40.747620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.747638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.754949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee5a90 00:35:49.178 [2024-12-13 06:41:40.755879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.755896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.763913] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf988 00:35:49.178 [2024-12-13 06:41:40.764785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.764804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.772498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6300 00:35:49.178 [2024-12-13 06:41:40.773192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.773211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.781941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016efa7d8 00:35:49.178 [2024-12-13 06:41:40.782894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.782912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.791031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee95a0 00:35:49.178 [2024-12-13 06:41:40.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25349 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.791570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 
00:35:49.178 [2024-12-13 06:41:40.800318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eebb98 00:35:49.178 [2024-12-13 06:41:40.801094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.801112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:35:49.178 [2024-12-13 06:41:40.808518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee3498 00:35:49.178 [2024-12-13 06:41:40.809349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.178 [2024-12-13 06:41:40.809367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:35:49.179 [2024-12-13 06:41:40.817817] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016edf550 00:35:49.179 [2024-12-13 06:41:40.818717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.179 [2024-12-13 06:41:40.818735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:49.179 [2024-12-13 06:41:40.828714] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee6fa8 00:35:49.179 [2024-12-13 06:41:40.830134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.179 [2024-12-13 06:41:40.830152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:35:49.437 [2024-12-13 06:41:40.835241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016eed4e8 00:35:49.437 [2024-12-13 06:41:40.835915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.438 [2024-12-13 06:41:40.835936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:35:49.438 28125.00 IOPS, 109.86 MiB/s [2024-12-13T05:41:41.092Z] [2024-12-13 06:41:40.845457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd00dc0) with pdu=0x200016ee4de8 00:35:49.438 [2024-12-13 06:41:40.846228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:22898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:35:49.438 [2024-12-13 06:41:40.846245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:35:49.438 00:35:49.438 Latency(us) 00:35:49.438 [2024-12-13T05:41:41.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.438 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:49.438 nvme0n1 : 2.01 28124.10 109.86 0.00 0.00 4545.20 1786.64 13169.62 00:35:49.438 [2024-12-13T05:41:41.092Z] =================================================================================================================== 00:35:49.438 [2024-12-13T05:41:41.092Z] Total : 28124.10 109.86 0.00 0.00 4545.20 1786.64 13169.62 00:35:49.438 { 00:35:49.438 "results": [ 00:35:49.438 { 00:35:49.438 "job": "nvme0n1", 00:35:49.438 "core_mask": "0x2", 00:35:49.438 "workload": "randwrite", 00:35:49.438 "status": "finished", 00:35:49.438 "queue_depth": 128, 00:35:49.438 "io_size": 4096, 00:35:49.438 "runtime": 2.006855, 00:35:49.438 "iops": 
28124.104631375958, 00:35:49.438 "mibps": 109.85978371631234, 00:35:49.438 "io_failed": 0, 00:35:49.438 "io_timeout": 0, 00:35:49.438 "avg_latency_us": 4545.198872349634, 00:35:49.438 "min_latency_us": 1786.6361904761904, 00:35:49.438 "max_latency_us": 13169.615238095239 00:35:49.438 } 00:35:49.438 ], 00:35:49.438 "core_count": 1 00:35:49.438 } 00:35:49.438 06:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:49.438 06:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:49.438 06:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:49.438 | .driver_specific 00:35:49.438 | .nvme_error 00:35:49.438 | .status_code 00:35:49.438 | .command_transient_transport_error' 00:35:49.438 06:41:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 )) 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197225 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197225 ']' 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197225 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:49.438 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197225 00:35:49.697 06:41:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197225' 00:35:49.697 killing process with pid 1197225 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1197225 00:35:49.697 Received shutdown signal, test time was about 2.000000 seconds 00:35:49.697 00:35:49.697 Latency(us) 00:35:49.697 [2024-12-13T05:41:41.351Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.697 [2024-12-13T05:41:41.351Z] =================================================================================================================== 00:35:49.697 [2024-12-13T05:41:41.351Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197225 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1197880 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1197880 /var/tmp/bperf.sock 00:35:49.697 06:41:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1197880 ']' 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:49.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:49.697 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:49.697 [2024-12-13 06:41:41.324213] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:35:49.697 [2024-12-13 06:41:41.324259] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197880 ] 00:35:49.697 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:49.697 Zero copy mechanism will not be used. 
00:35:49.956 [2024-12-13 06:41:41.400005] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.956 [2024-12-13 06:41:41.421440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.956 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.956 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:49.956 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:49.956 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:50.214 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:50.215 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.215 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.215 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.215 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:50.215 06:41:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:50.783 nvme0n1 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:35:50.783 06:41:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:50.783 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:50.783 Zero copy mechanism will not be used. 00:35:50.783 Running I/O for 2 seconds... 00:35:50.783 [2024-12-13 06:41:42.288134] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:50.783 [2024-12-13 06:41:42.288298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.783 [2024-12-13 06:41:42.288325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:50.783 [2024-12-13 06:41:42.295637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:50.783 [2024-12-13 06:41:42.295807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:50.783 [2024-12-13 06:41:42.295829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:50.783 [2024-12-13 
06:41:42.302522] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8
00:35:50.783 [2024-12-13 06:41:42.302646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:35:50.783 [2024-12-13 06:41:42.302667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[log condensed: the same three-record sequence (tcp.c:2241:data_crc32_calc_done *ERROR* Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8; nvme_qpair.c:243 *NOTICE* WRITE sqid:1 cid:0 nsid:1 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; nvme_qpair.c:474 *NOTICE* COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 p:0 m:0 dnr:0) repeats from 06:41:42.308839 through 06:41:42.683349, with only the lba and sqhd fields varying]
00:35:51.046 [2024-12-13 06:41:42.687896]
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.046 [2024-12-13 06:41:42.687947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.046 [2024-12-13 06:41:42.687966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.046 [2024-12-13 06:41:42.692882] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.046 [2024-12-13 06:41:42.692976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.046 [2024-12-13 06:41:42.692995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.046 [2024-12-13 06:41:42.697457] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.046 [2024-12-13 06:41:42.697513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.046 [2024-12-13 06:41:42.697531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.701948] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.702004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.702022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:51.307 [2024-12-13 06:41:42.706305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.706424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.706444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.710889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.710951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.710969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.715609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.715662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.715680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.721178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.721242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.721260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.725949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.726026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.726045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.730529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.730583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.730600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.735251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.735359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.735377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.739808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.739867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.739885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.744518] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.744572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.744590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.749039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.749100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.749118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.753598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.753667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.753684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.758153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.758212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.758229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.762798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.762850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.762868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.767299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.767406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.767424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.771543] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.771635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.771654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.776180] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.776251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:51.307 [2024-12-13 06:41:42.776270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.780900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.781027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.781046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.785825] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.785975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.785998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.791091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.791200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.791218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.796463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.796522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.796540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.801661] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.801793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.801813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.806475] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.806545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.806563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.811105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.811155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.811173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.815651] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.815754] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.815773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.819968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.307 [2024-12-13 06:41:42.820032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.307 [2024-12-13 06:41:42.820050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.307 [2024-12-13 06:41:42.824643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.824786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.824804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.829185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.829246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.829264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.833519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.833575] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.833593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.837778] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.837844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.837863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.842011] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.842071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.842089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.846366] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.846422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.846440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.850614] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 
00:35:51.308 [2024-12-13 06:41:42.850675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.850693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.854840] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.854901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.854919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.859085] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.859138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.859156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.863358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.863422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.863439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.867645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.867708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.867726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.871984] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.872045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.872063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.876248] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.876298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.876316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.880496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.880559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.880577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.884720] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.884773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.884792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.888968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.889038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.889056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.893198] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.893260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.893278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.897461] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.897518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.897535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:51.308 [2024-12-13 06:41:42.901717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.901775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.901796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.906021] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.906076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.906094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.910260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.910322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.910340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.914523] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.914590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.914608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.918779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.918839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.918857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.923502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.923625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.923643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.929202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.929382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.929401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.935820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.936005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.936024] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.941988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.308 [2024-12-13 06:41:42.942112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.308 [2024-12-13 06:41:42.942132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.308 [2024-12-13 06:41:42.947312] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.309 [2024-12-13 06:41:42.947417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.309 [2024-12-13 06:41:42.947437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.309 [2024-12-13 06:41:42.952679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.309 [2024-12-13 06:41:42.952777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.309 [2024-12-13 06:41:42.952796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.309 [2024-12-13 06:41:42.958094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.309 [2024-12-13 06:41:42.958251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.309 [2024-12-13 06:41:42.958271] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.963302] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.963392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.963411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.968732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.968906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.968925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.974109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.974267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.974286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.979338] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.979465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:51.569 [2024-12-13 06:41:42.979483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.984519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.984616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.984635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.989952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.990122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.990140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:42.995493] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:42.995645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:42.995664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.001330] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.001390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.001408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.006364] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.006512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.006532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.011469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.011523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.011541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.016485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.016544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.016561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.021575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.021639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.021657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.027572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.027693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.027711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.032540] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.032651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.032670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.037230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 [2024-12-13 06:41:43.037289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.569 [2024-12-13 06:41:43.037310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.569 [2024-12-13 06:41:43.041790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.569 
[2024-12-13 06:41:43.041858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.041876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.046511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.046573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.046591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.051260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.051317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.051335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.055894] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.055959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.055977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.060499] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.060560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.060578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.065188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.065257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.065275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.070512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.070568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.070585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.075735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.075788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.075806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.080630] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.080712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.080730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.085443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.085570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.085589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.090093] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.090213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.090232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.094755] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.094809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.094827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:51.570 [2024-12-13 06:41:43.099254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.099311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.099328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.103847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.103911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.103929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.108566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.108626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.108644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.113520] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.113588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.113605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.118708] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.118758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.118776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.123806] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.123867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.123884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.128559] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.128692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.128710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.133293] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.133438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.133462] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.138414] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.138484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.138502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.143398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.143473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.143506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.148470] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.148528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.148546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.153761] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.153850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.153869] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.159043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.159093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.570 [2024-12-13 06:41:43.159111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.570 [2024-12-13 06:41:43.164681] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.570 [2024-12-13 06:41:43.164740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.164761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.169871] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.169947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.169964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.174866] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.174948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:51.571 [2024-12-13 06:41:43.174967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.180042] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.180102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.180119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.185443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.185596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.185614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.190885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.190996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.191015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.196888] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.196973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.196992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.202220] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.202352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.202370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.207367] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.207421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.207439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.212463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.212527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.212549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.571 [2024-12-13 06:41:43.217701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.571 [2024-12-13 06:41:43.217756] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.571 [2024-12-13 06:41:43.217774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.223159] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.223210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.223227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.228358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.228414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.228432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.233878] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.233941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.233960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.239082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.239214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.239234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.244139] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.244192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.244210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.249342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.249413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.249431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.254514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.254604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.254623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.259677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with 
pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.259731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.259749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.264339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.264432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.264457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.269174] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.269228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.269246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.274412] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.274486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.274504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.279367] tcp.c:2241:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.279419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.279437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.831 6240.00 IOPS, 780.00 MiB/s [2024-12-13T05:41:43.485Z] [2024-12-13 06:41:43.285060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.285117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.831 [2024-12-13 06:41:43.285135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.831 [2024-12-13 06:41:43.291270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.831 [2024-12-13 06:41:43.291430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.291454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.298819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.298960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.298979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.305798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.305882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.305909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.311732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.311983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.312003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.317036] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.317300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.317319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.321717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.321993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.322012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.326225] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.326511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.326530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.330515] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.330798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.330817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.334909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.335186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.335206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.339343] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.339624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.339644] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.343855] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.344126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.344145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.348223] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.348500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.348519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.352469] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.352738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.352757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.356955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.357226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 
[2024-12-13 06:41:43.357245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.361342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.361623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.361642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.365542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.365810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.365829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.369721] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.369989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.370009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.373891] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.374169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.374188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.378102] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.378372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.378390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.382241] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.382515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.382535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.386424] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.386725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.386745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.390696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.390973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.390992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.394849] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.395127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.395146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.399033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.399303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.399322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.403157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.832 [2024-12-13 06:41:43.403424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.832 [2024-12-13 06:41:43.403506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.832 [2024-12-13 06:41:43.407558] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.407836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.407858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.412046] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.412317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.412336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.417005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.417274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.417294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.421803] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.422085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.422104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.426625] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 
[2024-12-13 06:41:43.426884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.426903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.431725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.431968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.431988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.436818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.437092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.437111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.441727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.441990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.442009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.446715] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.446985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.447004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.451729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.451991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.452010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.456967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.457225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.457244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.462104] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.462370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.462389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.467587] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.467860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.467882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.472893] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.473198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.473217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.478516] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.478780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.478800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:51.833 [2024-12-13 06:41:43.483750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:51.833 [2024-12-13 06:41:43.484027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:51.833 [2024-12-13 06:41:43.484046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 
00:35:52.093 [2024-12-13 06:41:43.488992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.093 [2024-12-13 06:41:43.489270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.093 [2024-12-13 06:41:43.489290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.093 [2024-12-13 06:41:43.494080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.093 [2024-12-13 06:41:43.494340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.093 [2024-12-13 06:41:43.494361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.093 [2024-12-13 06:41:43.499307] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.093 [2024-12-13 06:41:43.499570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.093 [2024-12-13 06:41:43.499592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.093 [2024-12-13 06:41:43.504120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.093 [2024-12-13 06:41:43.504396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.093 [2024-12-13 06:41:43.504415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.093 [2024-12-13 06:41:43.508932] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.509207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.509226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.514197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.514461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.514481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.519496] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.519751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.519770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.524684] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.524954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.524973] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.529408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.529688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.529708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.534199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.534480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.534500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.539078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.539349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.539367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.544028] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.544296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.544315] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.548494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.548764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.548783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.553289] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.553559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.553578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.558897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.559158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.559178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.563800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.564052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:52.094 [2024-12-13 06:41:43.564071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.568408] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.568678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.568698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.572852] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.573128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.573147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.577124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.577407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.577426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.581506] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.581773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.581792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.585939] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.586220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.586238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.590443] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.590725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.590744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.594913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.595190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.595213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.599141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.599408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.599428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.603547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.603815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.603834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.608080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.608347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.608366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.613226] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.613493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.613512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.618277] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 
[2024-12-13 06:41:43.618563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.618582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.623920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.624178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.624197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.628679] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.628950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.628969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.633318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.094 [2024-12-13 06:41:43.633598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.094 [2024-12-13 06:41:43.633617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.094 [2024-12-13 06:41:43.637869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.638143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.638162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.642703] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.642980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.643000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.647858] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.648131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.648150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.653024] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.653295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.653314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.658234] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.658508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.658527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.662774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.663040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.663058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.667287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.667548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.667567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.671584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.671852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.671872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:35:52.095 [2024-12-13 06:41:43.675993] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.676253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.676272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.680528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.680797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.680816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.685039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.685311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.685330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.689630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.689900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.689919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.693873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.694146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.694165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.698109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.698386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.698405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.702351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.702633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.702652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.706560] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.706828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.706848] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.710780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.711053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.711075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.715017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.715282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.715304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.719407] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.719689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.719708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.723941] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.724211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.724229] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.728148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.728420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.728439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.732658] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.732938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.732957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.737254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.737527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.737546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.741854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.742145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:52.095 [2024-12-13 06:41:43.742164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.095 [2024-12-13 06:41:43.746526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.095 [2024-12-13 06:41:43.746803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.095 [2024-12-13 06:41:43.746823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.355 [2024-12-13 06:41:43.751120] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.751407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.751427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.755746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.756020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.756039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.760228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.760507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.760526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.764813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.765083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.765102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.769240] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.769517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.769536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.773512] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.773790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.773808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.778187] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.778482] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.778501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.782798] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.783070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.783088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.787340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.787622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.787642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.791797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.792062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.792081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.796636] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.796913] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.796933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.801557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.801824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.801843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.805955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.806234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.806253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.810517] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.810796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.810816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.815002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 
00:35:52.356 [2024-12-13 06:41:43.815273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.815292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.819231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.819518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.819538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.823369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.823649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.823668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.827537] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.827811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.827830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.831697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.831970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.831992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.835917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.836195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.836215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.840058] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.840326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.840345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.844251] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.844543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.844562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.848369] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.848645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.848664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.852599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.852886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.857019] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.857296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.857315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.356 [2024-12-13 06:41:43.861164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.356 [2024-12-13 06:41:43.861442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.356 [2024-12-13 06:41:43.861467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:35:52.357 [2024-12-13 06:41:43.865285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.865568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.865587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.869401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.869688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.869707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.873545] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.873820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.873839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.877694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.877967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.877986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.881818] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.882088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.882107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.885967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.886233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.886252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.890115] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.890389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.890408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.894633] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.894911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.894930] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.898954] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.899230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.899249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.903348] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.903631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.903650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.908030] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.908297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.908316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.913099] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.913359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.913378] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.917930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.918200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.918219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.922541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.922808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.922827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.927123] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.927396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.927415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.931644] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.931908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:35:52.357 [2024-12-13 06:41:43.931928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.936204] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.936504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.936523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.940728] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.941008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.941027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.945460] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.945732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.945755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.949978] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.950232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.950251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.954647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.954924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.954944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.958826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.959103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.959122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.962992] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.963267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.963287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.967138] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.967415] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.967434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.971327] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.971612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.971631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.975501] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.975771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.975790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.979656] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.357 [2024-12-13 06:41:43.979929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.357 [2024-12-13 06:41:43.979948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.357 [2024-12-13 06:41:43.983786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.358 [2024-12-13 06:41:43.984058] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:43.984077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:43.987970] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.358 [2024-12-13 06:41:43.988245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:43.988265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:43.992114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.358 [2024-12-13 06:41:43.992389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:43.992408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:43.996281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.358 [2024-12-13 06:41:43.996551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:43.996570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:44.000502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 
00:35:52.358 [2024-12-13 06:41:44.000780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:44.000800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:44.004697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.358 [2024-12-13 06:41:44.004976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.358 [2024-12-13 06:41:44.004996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.358 [2024-12-13 06:41:44.009005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.009275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.009295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.013525] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.013810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.013829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.018608] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.018867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.018886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.023639] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.023903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.023923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.028508] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.028767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.028787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.033503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.033758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.033777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.038316] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.038592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.038612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.042742] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.043009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.043028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.048110] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.048388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.048407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.053022] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.053298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.053317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:35:52.618 [2024-12-13 06:41:44.057816] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.058099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.058118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.062376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.062654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.062678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.066907] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.067184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.067203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.071259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.618 [2024-12-13 06:41:44.071546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.618 [2024-12-13 06:41:44.071565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.618 [2024-12-13 06:41:44.075637] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.075906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.075925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.080056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.080335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.080354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.084556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.084840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.084858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.089071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.089346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.089365] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.093582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.093849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.093868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.098181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.098461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.098481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.102387] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.102658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.102681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.106850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.107120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.107139] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.111362] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.111641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.111661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.116486] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.116767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.116787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.121676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.121940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.121960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.126400] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.126680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:35:52.619 [2024-12-13 06:41:44.126699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.130717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.130991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.131010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.135153] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.135427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.135446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.139418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.139696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.139715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.143706] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.143980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.144000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.148118] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.148394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.148414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.152766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.153036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.153055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.157717] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.157969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.157988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.162599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.162865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.162884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.167393] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.167651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.167670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.172629] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.172826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.172845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.177342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.177616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.177635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.181763] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 
[2024-12-13 06:41:44.182033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.182052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.186029] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.186302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.186321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.190245] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.190533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.190552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.194670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.194946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.194964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.619 [2024-12-13 06:41:44.199260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.619 [2024-12-13 06:41:44.199560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.619 [2024-12-13 06:41:44.199579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.205186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.205508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.205528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.211519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.211870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.211889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.218148] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.218497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.218516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.225900] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.226253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.226272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.232909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.233282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.233305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.240333] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.240635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.240654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.247345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.247713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.247732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:35:52.620 [2024-12-13 06:41:44.254432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.254750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.254769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.261528] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.261814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.261834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:52.620 [2024-12-13 06:41:44.268320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.620 [2024-12-13 06:41:44.268664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.620 [2024-12-13 06:41:44.268684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:35:52.879 [2024-12-13 06:41:44.275965] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.879 [2024-12-13 06:41:44.276320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.879 [2024-12-13 06:41:44.276340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:35:52.879 [2024-12-13 06:41:44.282908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0xd01100) with pdu=0x200016eff3c8 00:35:52.879 [2024-12-13 06:41:44.284533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:35:52.879 [2024-12-13 06:41:44.284553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:35:52.879 6386.00 IOPS, 798.25 MiB/s 00:35:52.879 Latency(us) 00:35:52.879 [2024-12-13T05:41:44.533Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.879 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:52.879 nvme0n1 : 2.00 6378.29 797.29 0.00 0.00 2503.68 1825.65 14105.84 00:35:52.879 [2024-12-13T05:41:44.533Z] =================================================================================================================== 00:35:52.879 [2024-12-13T05:41:44.533Z] Total : 6378.29 797.29 0.00 0.00 2503.68 1825.65 14105.84 00:35:52.879 { 00:35:52.879 "results": [ 00:35:52.879 { 00:35:52.879 "job": "nvme0n1", 00:35:52.879 "core_mask": "0x2", 00:35:52.879 "workload": "randwrite", 00:35:52.879 "status": "finished", 00:35:52.879 "queue_depth": 16, 00:35:52.879 "io_size": 131072, 00:35:52.879 "runtime": 2.004925, 00:35:52.879 "iops": 6378.29345237353, 00:35:52.879 "mibps": 797.2866815466913, 00:35:52.879 "io_failed": 0, 00:35:52.879 "io_timeout": 0, 00:35:52.879 "avg_latency_us": 2503.6765682112696, 00:35:52.879 "min_latency_us": 1825.6457142857143, 00:35:52.879 "max_latency_us": 14105.843809523809 00:35:52.879 } 00:35:52.879 ], 00:35:52.879 "core_count": 1 00:35:52.879 } 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:35:52.879 06:41:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:35:52.879 | .driver_specific 00:35:52.879 | .nvme_error 00:35:52.879 | .status_code 00:35:52.879 | .command_transient_transport_error' 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 413 > 0 )) 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1197880 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1197880 ']' 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1197880 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.879 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1197880 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1197880' 00:35:53.138 killing process with pid 1197880 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@973 -- # kill 1197880 00:35:53.138 Received shutdown signal, test time was about 2.000000 seconds 00:35:53.138 00:35:53.138 Latency(us) 00:35:53.138 [2024-12-13T05:41:44.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:53.138 [2024-12-13T05:41:44.792Z] =================================================================================================================== 00:35:53.138 [2024-12-13T05:41:44.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1197880 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1196069 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1196069 ']' 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1196069 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1196069 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1196069' 00:35:53.138 killing process with pid 1196069 00:35:53.138 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1196069 00:35:53.138 06:41:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1196069 00:35:53.397 00:35:53.397 real 0m14.206s 00:35:53.397 user 0m27.191s 00:35:53.397 sys 0m4.595s 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:53.397 ************************************ 00:35:53.397 END TEST nvmf_digest_error 00:35:53.397 ************************************ 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:53.397 06:41:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:53.397 rmmod nvme_tcp 00:35:53.397 rmmod nvme_fabrics 00:35:53.397 rmmod nvme_keyring 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 1196069 ']' 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 1196069 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 
-- # '[' -z 1196069 ']' 00:35:53.397 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1196069 00:35:53.398 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1196069) - No such process 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1196069 is not found' 00:35:53.398 Process with pid 1196069 is not found 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:53.398 06:41:45 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:55.933 00:35:55.933 real 0m36.303s 00:35:55.933 user 0m55.350s 00:35:55.933 sys 0m13.620s 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.933 ************************************ 00:35:55.933 END TEST nvmf_digest 00:35:55.933 ************************************ 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:55.933 ************************************ 00:35:55.933 START TEST nvmf_bdevperf 00:35:55.933 ************************************ 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:35:55.933 * Looking for test storage... 
00:35:55.933 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:55.933 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.934 --rc genhtml_branch_coverage=1 00:35:55.934 --rc genhtml_function_coverage=1 00:35:55.934 --rc genhtml_legend=1 00:35:55.934 --rc geninfo_all_blocks=1 00:35:55.934 --rc geninfo_unexecuted_blocks=1 00:35:55.934 00:35:55.934 ' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:35:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.934 --rc genhtml_branch_coverage=1 00:35:55.934 --rc genhtml_function_coverage=1 00:35:55.934 --rc genhtml_legend=1 00:35:55.934 --rc geninfo_all_blocks=1 00:35:55.934 --rc geninfo_unexecuted_blocks=1 00:35:55.934 00:35:55.934 ' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.934 --rc genhtml_branch_coverage=1 00:35:55.934 --rc genhtml_function_coverage=1 00:35:55.934 --rc genhtml_legend=1 00:35:55.934 --rc geninfo_all_blocks=1 00:35:55.934 --rc geninfo_unexecuted_blocks=1 00:35:55.934 00:35:55.934 ' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:55.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:55.934 --rc genhtml_branch_coverage=1 00:35:55.934 --rc genhtml_function_coverage=1 00:35:55.934 --rc genhtml_legend=1 00:35:55.934 --rc geninfo_all_blocks=1 00:35:55.934 --rc geninfo_unexecuted_blocks=1 00:35:55.934 00:35:55.934 ' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:55.934 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:35:55.934 06:41:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:02.506 06:41:52 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:02.506 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:02.506 
06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:02.506 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:02.506 Found net devices under 0000:af:00.0: cvl_0_0 00:36:02.506 06:41:52 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:02.506 Found net devices under 0000:af:00.1: cvl_0_1 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:02.506 06:41:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:02.506 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:02.506 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.397 ms 00:36:02.506 00:36:02.506 --- 10.0.0.2 ping statistics --- 00:36:02.506 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.506 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:36:02.506 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:02.506 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:02.506 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:36:02.507 00:36:02.507 --- 10.0.0.1 ping statistics --- 00:36:02.507 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:02.507 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1201823 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1201823 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1201823 ']' 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:02.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 [2024-12-13 06:41:53.291159] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:02.507 [2024-12-13 06:41:53.291206] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:02.507 [2024-12-13 06:41:53.371154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:02.507 [2024-12-13 06:41:53.393999] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:02.507 [2024-12-13 06:41:53.394034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:02.507 [2024-12-13 06:41:53.394041] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:02.507 [2024-12-13 06:41:53.394048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:02.507 [2024-12-13 06:41:53.394053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:02.507 [2024-12-13 06:41:53.395403] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:02.507 [2024-12-13 06:41:53.395433] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.507 [2024-12-13 06:41:53.395434] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 [2024-12-13 06:41:53.527353] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 Malloc0 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:02.507 [2024-12-13 06:41:53.601682] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:02.507 
06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:02.507 { 00:36:02.507 "params": { 00:36:02.507 "name": "Nvme$subsystem", 00:36:02.507 "trtype": "$TEST_TRANSPORT", 00:36:02.507 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:02.507 "adrfam": "ipv4", 00:36:02.507 "trsvcid": "$NVMF_PORT", 00:36:02.507 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:02.507 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:02.507 "hdgst": ${hdgst:-false}, 00:36:02.507 "ddgst": ${ddgst:-false} 00:36:02.507 }, 00:36:02.507 "method": "bdev_nvme_attach_controller" 00:36:02.507 } 00:36:02.507 EOF 00:36:02.507 )") 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:02.507 06:41:53 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:02.507 "params": { 00:36:02.507 "name": "Nvme1", 00:36:02.507 "trtype": "tcp", 00:36:02.507 "traddr": "10.0.0.2", 00:36:02.507 "adrfam": "ipv4", 00:36:02.507 "trsvcid": "4420", 00:36:02.507 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:02.507 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:02.507 "hdgst": false, 00:36:02.507 "ddgst": false 00:36:02.507 }, 00:36:02.507 "method": "bdev_nvme_attach_controller" 00:36:02.507 }' 00:36:02.507 [2024-12-13 06:41:53.654850] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:02.507 [2024-12-13 06:41:53.654893] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201849 ] 00:36:02.507 [2024-12-13 06:41:53.727780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.507 [2024-12-13 06:41:53.750415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:02.507 Running I/O for 1 seconds... 00:36:03.444 11243.00 IOPS, 43.92 MiB/s 00:36:03.444 Latency(us) 00:36:03.444 [2024-12-13T05:41:55.098Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:03.444 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:03.444 Verification LBA range: start 0x0 length 0x4000 00:36:03.444 Nvme1n1 : 1.01 11312.73 44.19 0.00 0.00 11271.13 1146.88 12420.63 00:36:03.444 [2024-12-13T05:41:55.098Z] =================================================================================================================== 00:36:03.444 [2024-12-13T05:41:55.098Z] Total : 11312.73 44.19 0.00 0.00 11271.13 1146.88 12420.63 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1202090 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for 
subsystem in "${@:-1}" 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:03.710 { 00:36:03.710 "params": { 00:36:03.710 "name": "Nvme$subsystem", 00:36:03.710 "trtype": "$TEST_TRANSPORT", 00:36:03.710 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.710 "adrfam": "ipv4", 00:36:03.710 "trsvcid": "$NVMF_PORT", 00:36:03.710 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.710 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.710 "hdgst": ${hdgst:-false}, 00:36:03.710 "ddgst": ${ddgst:-false} 00:36:03.710 }, 00:36:03.710 "method": "bdev_nvme_attach_controller" 00:36:03.710 } 00:36:03.710 EOF 00:36:03.710 )") 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:03.710 06:41:55 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:03.710 "params": { 00:36:03.710 "name": "Nvme1", 00:36:03.710 "trtype": "tcp", 00:36:03.710 "traddr": "10.0.0.2", 00:36:03.710 "adrfam": "ipv4", 00:36:03.710 "trsvcid": "4420", 00:36:03.710 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:03.710 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:03.710 "hdgst": false, 00:36:03.710 "ddgst": false 00:36:03.710 }, 00:36:03.710 "method": "bdev_nvme_attach_controller" 00:36:03.710 }' 00:36:03.710 [2024-12-13 06:41:55.226821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:03.710 [2024-12-13 06:41:55.226870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202090 ] 00:36:03.710 [2024-12-13 06:41:55.303247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:03.710 [2024-12-13 06:41:55.323313] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:03.969 Running I/O for 15 seconds... 00:36:05.841 11322.00 IOPS, 44.23 MiB/s [2024-12-13T05:41:58.442Z] 11454.00 IOPS, 44.74 MiB/s [2024-12-13T05:41:58.442Z] 06:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1201823 00:36:06.788 06:41:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:06.788 [2024-12-13 06:41:58.198479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:117312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:117320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:117328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198569] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:117336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:117344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:117352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:117360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:117368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:117384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:117392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:117400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:117408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:117416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:117424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 
06:41:58.198766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:117432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:117440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:117448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:117456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:117464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:06.788 [2024-12-13 06:41:58.198854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.788 [2024-12-13 06:41:58.198862] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
[... repeated log entries elided: alternating nvme_qpair.c: 243:nvme_io_qpair_print_command *NOTICE* (READ sqid:1 nsid:1, sequential lba:117472 through lba:118320, len:8 each, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and nvme_qpair.c: 474:spdk_nvme_print_completion *NOTICE* (ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0) pairs during qpair teardown, timestamps 2024-12-13 06:41:58.198869 through 06:41:58.200536 ...] 00:36:06.791 [2024-12-13 06:41:58.200543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1578920 is same with the state(6) to be set 00:36:06.791 [2024-12-13 06:41:58.200552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:06.791 [2024-12-13 06:41:58.200558] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:06.791 [2024-12-13 06:41:58.200565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118328 len:8 PRP1 0x0 PRP2 0x0 00:36:06.791 [2024-12-13 06:41:58.200572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:06.791 [2024-12-13 06:41:58.203407] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:06.791 [2024-12-13 06:41:58.203469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:06.791 [2024-12-13 06:41:58.204067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:06.791 [2024-12-13 06:41:58.204083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:06.791 [2024-12-13 06:41:58.204092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:06.791 [2024-12-13 06:41:58.204265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:06.791 [2024-12-13 06:41:58.204437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:06.791 [2024-12-13 06:41:58.204447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:06.791 [2024-12-13 06:41:58.204462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:06.791 [2024-12-13 06:41:58.204470] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:06.791 [2024-12-13 06:41:58.216672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.217136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.217182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.217208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.217741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.217915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.217924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.217931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.217938] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.791 [2024-12-13 06:41:58.229511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.229891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.229908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.229916] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.230084] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.230252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.230260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.230266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.230273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.791 [2024-12-13 06:41:58.242461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.242763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.242779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.242789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.242956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.243128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.243136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.243142] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.243148] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.791 [2024-12-13 06:41:58.255322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.255608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.255624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.255632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.255799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.255966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.255974] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.255980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.255987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.791 [2024-12-13 06:41:58.268214] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.268557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.268574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.268581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.268748] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.268916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.268924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.268930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.268936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.791 [2024-12-13 06:41:58.281124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.791 [2024-12-13 06:41:58.281587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.791 [2024-12-13 06:41:58.281604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.791 [2024-12-13 06:41:58.281611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.791 [2024-12-13 06:41:58.281778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.791 [2024-12-13 06:41:58.281950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.791 [2024-12-13 06:41:58.281958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.791 [2024-12-13 06:41:58.281964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.791 [2024-12-13 06:41:58.281971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.294006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.294360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.294376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.294383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.294557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.294726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.294734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.294740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.294746] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.306851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.307268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.307284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.307291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.307466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.307633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.307641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.307648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.307655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.319792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.320240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.320256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.320263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.320430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.320603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.320612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.320622] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.320628] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.332674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.332972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.332987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.332995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.333162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.333330] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.333338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.333344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.333350] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.345614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.345987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.346003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.346010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.346177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.346344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.346351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.346357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.346364] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.358535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.358892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.358948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.358977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.359581] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.360113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.360122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.360129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.360135] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.371358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.371664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.371681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.371688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.371854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.372022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.372030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.372037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.372044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.384276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.384632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.384649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.384656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.384822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.792 [2024-12-13 06:41:58.384989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.792 [2024-12-13 06:41:58.384997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.792 [2024-12-13 06:41:58.385003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.792 [2024-12-13 06:41:58.385009] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.792 [2024-12-13 06:41:58.397118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.792 [2024-12-13 06:41:58.397538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.792 [2024-12-13 06:41:58.397554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.792 [2024-12-13 06:41:58.397561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.792 [2024-12-13 06:41:58.397729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.793 [2024-12-13 06:41:58.397900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.793 [2024-12-13 06:41:58.397907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.793 [2024-12-13 06:41:58.397913] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.793 [2024-12-13 06:41:58.397919] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.793 [2024-12-13 06:41:58.410031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.793 [2024-12-13 06:41:58.410453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.793 [2024-12-13 06:41:58.410470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.793 [2024-12-13 06:41:58.410497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.793 [2024-12-13 06:41:58.410669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.793 [2024-12-13 06:41:58.410842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.793 [2024-12-13 06:41:58.410850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.793 [2024-12-13 06:41:58.410856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.793 [2024-12-13 06:41:58.410863] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:06.793 [2024-12-13 06:41:58.422822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:06.793 [2024-12-13 06:41:58.423161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:06.793 [2024-12-13 06:41:58.423177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:06.793 [2024-12-13 06:41:58.423184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:06.793 [2024-12-13 06:41:58.423351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:06.793 [2024-12-13 06:41:58.423524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:06.793 [2024-12-13 06:41:58.423532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:06.793 [2024-12-13 06:41:58.423538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:06.793 [2024-12-13 06:41:58.423544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.152 [2024-12-13 06:41:58.436276] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.152 [2024-12-13 06:41:58.436739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.152 [2024-12-13 06:41:58.436758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.152 [2024-12-13 06:41:58.436766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.152 [2024-12-13 06:41:58.436974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.152 [2024-12-13 06:41:58.437146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.152 [2024-12-13 06:41:58.437154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.152 [2024-12-13 06:41:58.437161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.152 [2024-12-13 06:41:58.437167] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.152 [2024-12-13 06:41:58.449344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.152 [2024-12-13 06:41:58.449756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.152 [2024-12-13 06:41:58.449774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.152 [2024-12-13 06:41:58.449782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.152 [2024-12-13 06:41:58.449956] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.152 [2024-12-13 06:41:58.450132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.152 [2024-12-13 06:41:58.450140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.152 [2024-12-13 06:41:58.450147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.153 [2024-12-13 06:41:58.450153] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.153 [2024-12-13 06:41:58.462358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.153 [2024-12-13 06:41:58.462804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.153 [2024-12-13 06:41:58.462849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.153 [2024-12-13 06:41:58.462872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.153 [2024-12-13 06:41:58.463360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.153 [2024-12-13 06:41:58.463538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.153 [2024-12-13 06:41:58.463548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.153 [2024-12-13 06:41:58.463554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.153 [2024-12-13 06:41:58.463561] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.153 [2024-12-13 06:41:58.475290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.153 [2024-12-13 06:41:58.475740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.153 [2024-12-13 06:41:58.475786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.153 [2024-12-13 06:41:58.475809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.153 [2024-12-13 06:41:58.476305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.153 [2024-12-13 06:41:58.476479] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.153 [2024-12-13 06:41:58.476487] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.153 [2024-12-13 06:41:58.476493] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.153 [2024-12-13 06:41:58.476500] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.153 [2024-12-13 06:41:58.488175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.153 [2024-12-13 06:41:58.488572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.153 [2024-12-13 06:41:58.488588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.153 [2024-12-13 06:41:58.488595] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.153 [2024-12-13 06:41:58.488763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.153 [2024-12-13 06:41:58.488931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.153 [2024-12-13 06:41:58.488939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.153 [2024-12-13 06:41:58.488949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.153 [2024-12-13 06:41:58.488955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.153 10349.33 IOPS, 40.43 MiB/s [2024-12-13T05:41:58.807Z] [2024-12-13 06:41:58.500961] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.501313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.501358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.501381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.501980] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.502401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.502409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.502416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.502422] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.513913] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.514344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.514389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.514413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.515011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.515507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.515515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.515521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.515527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.526840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.527234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.527251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.527258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.527425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.527599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.527608] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.527614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.527620] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.539739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.540154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.540170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.540177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.540344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.540533] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.540541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.540548] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.540554] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.552659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.553058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.553074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.553081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.553249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.553417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.553425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.553431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.553438] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.565577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.566007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.566050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.566073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.566510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.566679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.566687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.566693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.153 [2024-12-13 06:41:58.566699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.153 [2024-12-13 06:41:58.578357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.153 [2024-12-13 06:41:58.578723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.153 [2024-12-13 06:41:58.578739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.153 [2024-12-13 06:41:58.578750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.153 [2024-12-13 06:41:58.578916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.153 [2024-12-13 06:41:58.579083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.153 [2024-12-13 06:41:58.579091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.153 [2024-12-13 06:41:58.579097] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.579104] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.591084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.591500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.591516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.591523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.591690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.591858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.591865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.591872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.591878] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.603912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.604312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.604356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.604380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.604817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.604985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.604993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.604999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.605005] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.616758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.617197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.617213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.617220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.617387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.617564] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.617572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.617579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.617585] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.629637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.630055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.630071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.630078] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.630244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.630415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.630423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.630429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.630435] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.642433] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.642789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.642805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.642812] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.642979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.643146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.643154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.643160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.643166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.655326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.655739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.655784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.655806] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.656336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.656711] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.656728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.656746] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.656759] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.670044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.670576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.670598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.670609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.670851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.671094] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.671105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.671114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.671123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.682978] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.683423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.683439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.683446] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.683620] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.683787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.683795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.683801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.683807] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.695855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.696214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.696230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.696238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.696405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.696581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.696589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.696595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.154 [2024-12-13 06:41:58.696601] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.154 [2024-12-13 06:41:58.708701] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.154 [2024-12-13 06:41:58.709161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.154 [2024-12-13 06:41:58.709207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.154 [2024-12-13 06:41:58.709230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.154 [2024-12-13 06:41:58.709767] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.154 [2024-12-13 06:41:58.709941] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.154 [2024-12-13 06:41:58.709949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.154 [2024-12-13 06:41:58.709956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.155 [2024-12-13 06:41:58.709963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.155 [2024-12-13 06:41:58.721605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.155 [2024-12-13 06:41:58.722045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.155 [2024-12-13 06:41:58.722062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.155 [2024-12-13 06:41:58.722070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.155 [2024-12-13 06:41:58.722243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.155 [2024-12-13 06:41:58.722419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.155 [2024-12-13 06:41:58.722427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.155 [2024-12-13 06:41:58.722433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.155 [2024-12-13 06:41:58.722440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.155 [2024-12-13 06:41:58.734555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.155 [2024-12-13 06:41:58.734845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.155 [2024-12-13 06:41:58.734889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.155 [2024-12-13 06:41:58.734911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.155 [2024-12-13 06:41:58.735505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.155 [2024-12-13 06:41:58.736018] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.155 [2024-12-13 06:41:58.736026] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.155 [2024-12-13 06:41:58.736033] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.155 [2024-12-13 06:41:58.736039] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.155 [2024-12-13 06:41:58.747539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.155 [2024-12-13 06:41:58.747944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.155 [2024-12-13 06:41:58.747960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.155 [2024-12-13 06:41:58.747971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.155 [2024-12-13 06:41:58.748143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.155 [2024-12-13 06:41:58.748315] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.155 [2024-12-13 06:41:58.748323] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.155 [2024-12-13 06:41:58.748329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.155 [2024-12-13 06:41:58.748336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.155 [2024-12-13 06:41:58.760804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.155 [2024-12-13 06:41:58.761237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.155 [2024-12-13 06:41:58.761254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.155 [2024-12-13 06:41:58.761262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.155 [2024-12-13 06:41:58.761439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.155 [2024-12-13 06:41:58.761624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.155 [2024-12-13 06:41:58.761633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.155 [2024-12-13 06:41:58.761640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.155 [2024-12-13 06:41:58.761646] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.444 [2024-12-13 06:41:58.773886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.444 [2024-12-13 06:41:58.774270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.444 [2024-12-13 06:41:58.774287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.444 [2024-12-13 06:41:58.774295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.444 [2024-12-13 06:41:58.774476] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.444 [2024-12-13 06:41:58.774650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.444 [2024-12-13 06:41:58.774658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.444 [2024-12-13 06:41:58.774664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.444 [2024-12-13 06:41:58.774670] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.444 [2024-12-13 06:41:58.786887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.444 [2024-12-13 06:41:58.787326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.444 [2024-12-13 06:41:58.787364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.444 [2024-12-13 06:41:58.787390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.444 [2024-12-13 06:41:58.788004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.444 [2024-12-13 06:41:58.788176] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.444 [2024-12-13 06:41:58.788184] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.444 [2024-12-13 06:41:58.788190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.444 [2024-12-13 06:41:58.788197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.444 [2024-12-13 06:41:58.801959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.444 [2024-12-13 06:41:58.802478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.444 [2024-12-13 06:41:58.802524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.444 [2024-12-13 06:41:58.802547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.444 [2024-12-13 06:41:58.803061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.444 [2024-12-13 06:41:58.803314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.444 [2024-12-13 06:41:58.803326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.444 [2024-12-13 06:41:58.803335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.444 [2024-12-13 06:41:58.803345] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.444 [2024-12-13 06:41:58.815016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.444 [2024-12-13 06:41:58.815424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.444 [2024-12-13 06:41:58.815440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.444 [2024-12-13 06:41:58.815452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.444 [2024-12-13 06:41:58.815626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.444 [2024-12-13 06:41:58.815798] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.444 [2024-12-13 06:41:58.815806] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.444 [2024-12-13 06:41:58.815812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.444 [2024-12-13 06:41:58.815819] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.828004] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.828459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.828476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.828483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.828655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.828832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.828840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.828849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.828856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.840919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.841358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.841397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.841423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.841990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.842163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.842171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.842177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.842184] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.853800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.854224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.854269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.854291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.854888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.855260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.855268] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.855274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.855281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.866556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.866949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.866965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.866972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.867130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.867288] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.867295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.867301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.867307] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.879340] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.879788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.879834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.879857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.880308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.880481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.880489] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.880495] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.880501] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.892074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.892496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.892512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.892519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.892678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.892835] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.892843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.892849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.892855] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.904874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.905290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.905306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.905312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.905492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.905659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.905667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.905673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.905680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.917657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.918075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.918090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.918100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.918259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.918417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.918425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.918431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.918437] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.930412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.930856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.930892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.930917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.931454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.931622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.931630] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.931637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.931643] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.943209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.445 [2024-12-13 06:41:58.943625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.445 [2024-12-13 06:41:58.943641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.445 [2024-12-13 06:41:58.943648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.445 [2024-12-13 06:41:58.943819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.445 [2024-12-13 06:41:58.943978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.445 [2024-12-13 06:41:58.943985] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.445 [2024-12-13 06:41:58.943991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.445 [2024-12-13 06:41:58.943997] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.445 [2024-12-13 06:41:58.956040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:58.956361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:58.956376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:58.956383] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:58.956567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:58.956738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:58.956746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:58.956752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:58.956758] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:58.968767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:58.969198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:58.969242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:58.969266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:58.969863] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:58.970292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:58.970309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:58.970323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:58.970336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:58.983618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:58.984142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:58.984185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:58.984208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:58.984805] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:58.985391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:58.985415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:58.985436] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:58.985472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:58.996605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:58.996976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:58.996992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:58.997000] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:58.997172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:58.997347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:58.997355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:58.997365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:58.997371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.009419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.009844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.009890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.009914] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.010343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.010524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.010532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.010539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.010545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.024594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.025096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.025140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.025163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.025652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.025907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.025918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.025927] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.025936] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.037483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.037918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.037963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.037985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.038419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.038593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.038601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.038607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.038614] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.050398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.050845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.050889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.050912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.051412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.051644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.051662] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.051676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.051689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.065137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.065655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.065677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.065687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.065941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.066195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.066206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.066215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.066225] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.446 [2024-12-13 06:41:59.078251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.446 [2024-12-13 06:41:59.078612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.446 [2024-12-13 06:41:59.078628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.446 [2024-12-13 06:41:59.078636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.446 [2024-12-13 06:41:59.078807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.446 [2024-12-13 06:41:59.078979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.446 [2024-12-13 06:41:59.078987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.446 [2024-12-13 06:41:59.078993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.446 [2024-12-13 06:41:59.078999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.091050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.707 [2024-12-13 06:41:59.091440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.707 [2024-12-13 06:41:59.091460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.707 [2024-12-13 06:41:59.091470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.707 [2024-12-13 06:41:59.091653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.707 [2024-12-13 06:41:59.091821] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.707 [2024-12-13 06:41:59.091829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.707 [2024-12-13 06:41:59.091835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.707 [2024-12-13 06:41:59.091841] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.103971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.707 [2024-12-13 06:41:59.104405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.707 [2024-12-13 06:41:59.104421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.707 [2024-12-13 06:41:59.104428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.707 [2024-12-13 06:41:59.104606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.707 [2024-12-13 06:41:59.104779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.707 [2024-12-13 06:41:59.104787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.707 [2024-12-13 06:41:59.104793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.707 [2024-12-13 06:41:59.104799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.116778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.707 [2024-12-13 06:41:59.117097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.707 [2024-12-13 06:41:59.117112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.707 [2024-12-13 06:41:59.117119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.707 [2024-12-13 06:41:59.117277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.707 [2024-12-13 06:41:59.117435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.707 [2024-12-13 06:41:59.117442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.707 [2024-12-13 06:41:59.117453] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.707 [2024-12-13 06:41:59.117459] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.129638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.707 [2024-12-13 06:41:59.130026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.707 [2024-12-13 06:41:59.130041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.707 [2024-12-13 06:41:59.130048] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.707 [2024-12-13 06:41:59.130207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.707 [2024-12-13 06:41:59.130368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.707 [2024-12-13 06:41:59.130375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.707 [2024-12-13 06:41:59.130381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.707 [2024-12-13 06:41:59.130388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.142454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:07.707 [2024-12-13 06:41:59.142868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:07.707 [2024-12-13 06:41:59.142883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:07.707 [2024-12-13 06:41:59.142890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:07.707 [2024-12-13 06:41:59.143049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:07.707 [2024-12-13 06:41:59.143207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:07.707 [2024-12-13 06:41:59.143214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:07.707 [2024-12-13 06:41:59.143220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:07.707 [2024-12-13 06:41:59.143226] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:07.707 [2024-12-13 06:41:59.155338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.707 [2024-12-13 06:41:59.155772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.707 [2024-12-13 06:41:59.155788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.707 [2024-12-13 06:41:59.155795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.707 [2024-12-13 06:41:59.155962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.707 [2024-12-13 06:41:59.156130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.156137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.156143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.156150] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.168171] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.168585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.168601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.168607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.168766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.168924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.168931] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.168942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.168948] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.180897] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.181308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.181323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.181330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.181509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.181677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.181684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.181690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.181697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.193754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.194182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.194225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.194249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.194744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.194912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.194920] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.194926] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.194933] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.206820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.207144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.207160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.207167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.207333] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.207504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.207512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.207518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.207524] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.219800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.220246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.220262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.220269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.220436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.220750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.220759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.220766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.220773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.232757] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.233130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.233147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.233154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.233327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.233503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.233512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.233519] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.233525] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.245531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.245978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.246022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.246045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.246643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.247204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.247211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.247217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.247224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.258364] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.258809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.258854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.258885] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.259290] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.259463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.259472] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.259478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.259484] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.271187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.271605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.271621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.271628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.271786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.271944] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.708 [2024-12-13 06:41:59.271952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.708 [2024-12-13 06:41:59.271957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.708 [2024-12-13 06:41:59.271963] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.708 [2024-12-13 06:41:59.284270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.708 [2024-12-13 06:41:59.284708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.708 [2024-12-13 06:41:59.284725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.708 [2024-12-13 06:41:59.284732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.708 [2024-12-13 06:41:59.284904] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.708 [2024-12-13 06:41:59.285075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.285083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.285089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.285095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.709 [2024-12-13 06:41:59.296987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.709 [2024-12-13 06:41:59.297430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.709 [2024-12-13 06:41:59.297479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.709 [2024-12-13 06:41:59.297505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.709 [2024-12-13 06:41:59.298043] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.709 [2024-12-13 06:41:59.298214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.298222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.298228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.298234] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.709 [2024-12-13 06:41:59.309748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.709 [2024-12-13 06:41:59.310165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.709 [2024-12-13 06:41:59.310180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.709 [2024-12-13 06:41:59.310187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.709 [2024-12-13 06:41:59.310345] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.709 [2024-12-13 06:41:59.310527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.310535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.310542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.310548] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.709 [2024-12-13 06:41:59.322590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.709 [2024-12-13 06:41:59.323018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.709 [2024-12-13 06:41:59.323063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.709 [2024-12-13 06:41:59.323086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.709 [2024-12-13 06:41:59.323490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.709 [2024-12-13 06:41:59.323658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.323666] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.323672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.323679] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.709 [2024-12-13 06:41:59.335312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.709 [2024-12-13 06:41:59.335759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.709 [2024-12-13 06:41:59.335804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.709 [2024-12-13 06:41:59.335827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.709 [2024-12-13 06:41:59.336321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.709 [2024-12-13 06:41:59.336495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.336503] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.336513] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.336519] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.709 [2024-12-13 06:41:59.348077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.709 [2024-12-13 06:41:59.348471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.709 [2024-12-13 06:41:59.348487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.709 [2024-12-13 06:41:59.348494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.709 [2024-12-13 06:41:59.348652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.709 [2024-12-13 06:41:59.348809] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.709 [2024-12-13 06:41:59.348817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.709 [2024-12-13 06:41:59.348823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.709 [2024-12-13 06:41:59.348829] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.969 [2024-12-13 06:41:59.361175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.969 [2024-12-13 06:41:59.361612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.969 [2024-12-13 06:41:59.361656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.969 [2024-12-13 06:41:59.361678] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.969 [2024-12-13 06:41:59.362272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.969 [2024-12-13 06:41:59.362440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.969 [2024-12-13 06:41:59.362453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.969 [2024-12-13 06:41:59.362460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.969 [2024-12-13 06:41:59.362466] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.969 [2024-12-13 06:41:59.374073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.969 [2024-12-13 06:41:59.374490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.969 [2024-12-13 06:41:59.374507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.969 [2024-12-13 06:41:59.374514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.969 [2024-12-13 06:41:59.374682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.374849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.374856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.374862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.374869] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.386854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.387196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.387212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.387219] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.387386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.387558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.387567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.387573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.387579] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.399686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.400125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.400141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.400148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.400315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.400486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.400495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.400501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.400507] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.412526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.412957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.412975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.412982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.413150] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.413317] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.413326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.413332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.413338] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.425310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.425587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.425604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.425615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.425782] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.425949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.425957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.425963] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.425969] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.438265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.438617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.438634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.438641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.438808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.438978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.438986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.438992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.438999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.451256] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.451703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.451720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.451727] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.451899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.452071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.452079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.452085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.452091] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.464073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.464428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.464486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.464510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.465092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.465676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.465695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.465708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.465722] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.479068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.479545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.479568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.479578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.479832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.480086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.480098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.480107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.480117] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 [2024-12-13 06:41:59.492067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.492409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.492425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.492432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.492611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.970 [2024-12-13 06:41:59.492783] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.970 [2024-12-13 06:41:59.492792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.970 [2024-12-13 06:41:59.492798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.970 [2024-12-13 06:41:59.492804] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.970 7762.00 IOPS, 30.32 MiB/s [2024-12-13T05:41:59.624Z] [2024-12-13 06:41:59.505036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.970 [2024-12-13 06:41:59.505391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.970 [2024-12-13 06:41:59.505406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.970 [2024-12-13 06:41:59.505414] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.970 [2024-12-13 06:41:59.505593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.505774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.505781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.505790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.505797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.517857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.518222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.518239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.518246] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.518414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.518586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.518594] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.518600] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.518606] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.530755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.531149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.531165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.531172] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.531338] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.531510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.531519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.531525] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.531531] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.543663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.543987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.544003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.544010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.544177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.544345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.544352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.544358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.544365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.556452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.556846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.556890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.556913] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.557508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.558056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.558064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.558071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.558077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.569298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.569689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.569705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.569712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.569879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.570046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.570053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.570059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.570066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.582162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.582521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.582538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.582545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.582711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.582879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.582887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.582894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.582900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.595018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.595420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.595436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.595452] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.595621] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.595787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.595796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.595802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.595808] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.607964] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.608353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.608369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.608375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.608558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.608726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.608734] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.608740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.608747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:07.971 [2024-12-13 06:41:59.621031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:07.971 [2024-12-13 06:41:59.621374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:07.971 [2024-12-13 06:41:59.621390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:07.971 [2024-12-13 06:41:59.621397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:07.971 [2024-12-13 06:41:59.621575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:07.971 [2024-12-13 06:41:59.621747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:07.971 [2024-12-13 06:41:59.621755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:07.971 [2024-12-13 06:41:59.621761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:07.971 [2024-12-13 06:41:59.621768] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.633974] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.634314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.634330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.232 [2024-12-13 06:41:59.634338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.232 [2024-12-13 06:41:59.634517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.232 [2024-12-13 06:41:59.634687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.232 [2024-12-13 06:41:59.634695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.232 [2024-12-13 06:41:59.634701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.232 [2024-12-13 06:41:59.634708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.646893] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.647254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.647270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.232 [2024-12-13 06:41:59.647277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.232 [2024-12-13 06:41:59.647444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.232 [2024-12-13 06:41:59.647616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.232 [2024-12-13 06:41:59.647624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.232 [2024-12-13 06:41:59.647630] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.232 [2024-12-13 06:41:59.647636] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.659691] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.660102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.660118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.232 [2024-12-13 06:41:59.660125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.232 [2024-12-13 06:41:59.660292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.232 [2024-12-13 06:41:59.660465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.232 [2024-12-13 06:41:59.660473] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.232 [2024-12-13 06:41:59.660479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.232 [2024-12-13 06:41:59.660486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.672634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.673044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.673060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.232 [2024-12-13 06:41:59.673067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.232 [2024-12-13 06:41:59.673234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.232 [2024-12-13 06:41:59.673401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.232 [2024-12-13 06:41:59.673409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.232 [2024-12-13 06:41:59.673418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.232 [2024-12-13 06:41:59.673425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.685498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.685836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.685851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.232 [2024-12-13 06:41:59.685858] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.232 [2024-12-13 06:41:59.686026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.232 [2024-12-13 06:41:59.686193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.232 [2024-12-13 06:41:59.686201] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.232 [2024-12-13 06:41:59.686207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.232 [2024-12-13 06:41:59.686213] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.232 [2024-12-13 06:41:59.698452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.232 [2024-12-13 06:41:59.698750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.232 [2024-12-13 06:41:59.698766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.233 [2024-12-13 06:41:59.698774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.233 [2024-12-13 06:41:59.698941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.233 [2024-12-13 06:41:59.699108] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.233 [2024-12-13 06:41:59.699116] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.233 [2024-12-13 06:41:59.699122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.233 [2024-12-13 06:41:59.699129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.233 [2024-12-13 06:41:59.711312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.233 [2024-12-13 06:41:59.711735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.233 [2024-12-13 06:41:59.711751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.233 [2024-12-13 06:41:59.711758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.233 [2024-12-13 06:41:59.711925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.233 [2024-12-13 06:41:59.712092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.233 [2024-12-13 06:41:59.712100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.233 [2024-12-13 06:41:59.712106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.233 [2024-12-13 06:41:59.712112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.233 [2024-12-13 06:41:59.724144] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.724465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.724511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.724534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.725079] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.725246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.725254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.725260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.725266] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.736994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.737404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.737420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.737426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.737599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.737766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.737775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.737781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.737788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.749935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.750340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.750384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.750406] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.751004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.751510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.751519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.751527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.751534] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.762843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.763190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.763206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.763216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.763384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.763560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.763568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.763574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.763581] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.775661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.776096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.776141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.776164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.776763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.777349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.777375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.777396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.777416] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.788524] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.788866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.788909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.788932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.789400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.789575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.789583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.789590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.789596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.803077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.803485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.803531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.803555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.233 [2024-12-13 06:41:59.804137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.233 [2024-12-13 06:41:59.804628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.233 [2024-12-13 06:41:59.804640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.233 [2024-12-13 06:41:59.804649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.233 [2024-12-13 06:41:59.804658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.233 [2024-12-13 06:41:59.815996] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.233 [2024-12-13 06:41:59.816366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.233 [2024-12-13 06:41:59.816383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.233 [2024-12-13 06:41:59.816390] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.816567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.816739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.816747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.816754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.816760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.234 [2024-12-13 06:41:59.828994] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.234 [2024-12-13 06:41:59.829345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.234 [2024-12-13 06:41:59.829361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.234 [2024-12-13 06:41:59.829368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.829544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.829717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.829725] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.829731] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.829748] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.234 [2024-12-13 06:41:59.841903] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.234 [2024-12-13 06:41:59.842254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.234 [2024-12-13 06:41:59.842270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.234 [2024-12-13 06:41:59.842277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.842445] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.842618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.842627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.842636] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.842642] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.234 [2024-12-13 06:41:59.854895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.234 [2024-12-13 06:41:59.855315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.234 [2024-12-13 06:41:59.855331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.234 [2024-12-13 06:41:59.855338] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.855510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.855678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.855687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.855693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.855699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.234 [2024-12-13 06:41:59.867735] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.234 [2024-12-13 06:41:59.868171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.234 [2024-12-13 06:41:59.868187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.234 [2024-12-13 06:41:59.868194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.868361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.868531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.868540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.868546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.868552] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.234 [2024-12-13 06:41:59.880664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.234 [2024-12-13 06:41:59.881093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.234 [2024-12-13 06:41:59.881109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.234 [2024-12-13 06:41:59.881116] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.234 [2024-12-13 06:41:59.881288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.234 [2024-12-13 06:41:59.881465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.234 [2024-12-13 06:41:59.881474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.234 [2024-12-13 06:41:59.881480] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.234 [2024-12-13 06:41:59.881486] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.494 [2024-12-13 06:41:59.893700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.494 [2024-12-13 06:41:59.894068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.494 [2024-12-13 06:41:59.894085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.494 [2024-12-13 06:41:59.894092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.494 [2024-12-13 06:41:59.894259] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.494 [2024-12-13 06:41:59.894426] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.494 [2024-12-13 06:41:59.894434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.494 [2024-12-13 06:41:59.894440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.894446] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.906421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.906838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.906854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.906861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.907019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.907178] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.907185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.907191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.907197] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.919157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.919486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.919502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.919509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.919683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.919841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.919848] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.919855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.919861] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.931895] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.932317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.932332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.932342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.932523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.932691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.932699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.932705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.932711] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.944622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.945038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.945053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.945060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.945218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.945376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.945383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.945389] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.945395] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.957445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.957922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.957966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.957990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.958584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.959008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.959016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.959022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.959028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.970289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.970700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.970717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.970724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.970891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.971061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.971069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.971075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.971081] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.983201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.983634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.983650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.983657] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.983824] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.983991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.983998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.984004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.984011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:41:59.995987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:41:59.996384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:41:59.996401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:41:59.996408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:41:59.996600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:41:59.996773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:41:59.996781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:41:59.996787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:41:59.996793] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:42:00.009238] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:42:00.009676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.495 [2024-12-13 06:42:00.009694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.495 [2024-12-13 06:42:00.009701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.495 [2024-12-13 06:42:00.009874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.495 [2024-12-13 06:42:00.010046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.495 [2024-12-13 06:42:00.010054] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.495 [2024-12-13 06:42:00.010064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.495 [2024-12-13 06:42:00.010071] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.495 [2024-12-13 06:42:00.022249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.495 [2024-12-13 06:42:00.022581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.496 [2024-12-13 06:42:00.022598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.496 [2024-12-13 06:42:00.022606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.496 [2024-12-13 06:42:00.022779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.496 [2024-12-13 06:42:00.022952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.496 [2024-12-13 06:42:00.022961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.496 [2024-12-13 06:42:00.022967] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.496 [2024-12-13 06:42:00.022974] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.496 [2024-12-13 06:42:00.035283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.496 [2024-12-13 06:42:00.035684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.496 [2024-12-13 06:42:00.035700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.496 [2024-12-13 06:42:00.035708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.496 [2024-12-13 06:42:00.035881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.496 [2024-12-13 06:42:00.036053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.496 [2024-12-13 06:42:00.036061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.496 [2024-12-13 06:42:00.036068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.496 [2024-12-13 06:42:00.036075] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.496 [2024-12-13 06:42:00.048741] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.496 [2024-12-13 06:42:00.049078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.496 [2024-12-13 06:42:00.049094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.496 [2024-12-13 06:42:00.049101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.496 [2024-12-13 06:42:00.049274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.496 [2024-12-13 06:42:00.049456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.496 [2024-12-13 06:42:00.049465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.496 [2024-12-13 06:42:00.049473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.496 [2024-12-13 06:42:00.049480] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.496 [2024-12-13 06:42:00.061860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.496 [2024-12-13 06:42:00.062250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.496 [2024-12-13 06:42:00.062266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.496 [2024-12-13 06:42:00.062274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.496 [2024-12-13 06:42:00.062454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.496 [2024-12-13 06:42:00.062627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.496 [2024-12-13 06:42:00.062635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.496 [2024-12-13 06:42:00.062642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.496 [2024-12-13 06:42:00.062649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.496 [2024-12-13 06:42:00.074858] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:08.496 [2024-12-13 06:42:00.075265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:08.496 [2024-12-13 06:42:00.075282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420
00:36:08.496 [2024-12-13 06:42:00.075289] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set
00:36:08.496 [2024-12-13 06:42:00.075468] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor
00:36:08.496 [2024-12-13 06:42:00.075641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:08.496 [2024-12-13 06:42:00.075650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:08.496 [2024-12-13 06:42:00.075657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:08.496 [2024-12-13 06:42:00.075663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:08.496 [2024-12-13 06:42:00.088235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.496 [2024-12-13 06:42:00.088730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.496 [2024-12-13 06:42:00.088747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.496 [2024-12-13 06:42:00.088754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.496 [2024-12-13 06:42:00.088928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.496 [2024-12-13 06:42:00.089101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.496 [2024-12-13 06:42:00.089110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.496 [2024-12-13 06:42:00.089117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.496 [2024-12-13 06:42:00.089123] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.496 [2024-12-13 06:42:00.101347] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.496 [2024-12-13 06:42:00.101691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.496 [2024-12-13 06:42:00.101707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.496 [2024-12-13 06:42:00.101718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.496 [2024-12-13 06:42:00.101887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.496 [2024-12-13 06:42:00.102054] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.496 [2024-12-13 06:42:00.102062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.496 [2024-12-13 06:42:00.102068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.496 [2024-12-13 06:42:00.102075] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.496 [2024-12-13 06:42:00.114459] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.496 [2024-12-13 06:42:00.114872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.496 [2024-12-13 06:42:00.114888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.496 [2024-12-13 06:42:00.114896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.496 [2024-12-13 06:42:00.115069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.496 [2024-12-13 06:42:00.115241] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.496 [2024-12-13 06:42:00.115249] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.496 [2024-12-13 06:42:00.115255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.496 [2024-12-13 06:42:00.115262] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.496 [2024-12-13 06:42:00.127421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.496 [2024-12-13 06:42:00.127833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.496 [2024-12-13 06:42:00.127850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.496 [2024-12-13 06:42:00.127857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.496 [2024-12-13 06:42:00.128030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.496 [2024-12-13 06:42:00.128202] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.496 [2024-12-13 06:42:00.128210] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.496 [2024-12-13 06:42:00.128216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.496 [2024-12-13 06:42:00.128223] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.496 [2024-12-13 06:42:00.140423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.496 [2024-12-13 06:42:00.140849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.496 [2024-12-13 06:42:00.140866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.496 [2024-12-13 06:42:00.140873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.496 [2024-12-13 06:42:00.141046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.496 [2024-12-13 06:42:00.141222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.496 [2024-12-13 06:42:00.141231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.496 [2024-12-13 06:42:00.141237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.496 [2024-12-13 06:42:00.141243] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.153472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.153908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.153924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.153932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.154104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.154279] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.154287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.154293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.154300] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.166471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.166923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.166967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.166991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.167460] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.167637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.167645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.167652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.167658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.179470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.179820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.179836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.179843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.180015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.180187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.180195] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.180205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.180212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.192451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.192859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.192903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.192928] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.193453] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.193627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.193636] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.193642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.193649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.205400] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.205804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.205820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.205828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.206001] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.206174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.206182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.206189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.206196] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.218332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.218737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.218752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.218759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.218931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.219103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.219112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.219118] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.219124] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.231318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.231724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.231740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.231747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.231920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.232092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.232100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.232107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.232113] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.244289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.244724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.244770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.244794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.245376] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.245815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.245834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.245848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.245862] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.259212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.259710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.259731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.757 [2024-12-13 06:42:00.259742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.757 [2024-12-13 06:42:00.259996] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.757 [2024-12-13 06:42:00.260250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.757 [2024-12-13 06:42:00.260261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.757 [2024-12-13 06:42:00.260270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.757 [2024-12-13 06:42:00.260281] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.757 [2024-12-13 06:42:00.272269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.757 [2024-12-13 06:42:00.272683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.757 [2024-12-13 06:42:00.272699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.272710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.272882] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.273053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.273061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.273067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.273073] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.285285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.285697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.285714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.285721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.285893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.286065] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.286073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.286080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.286086] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.298244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.298669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.298686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.298693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.298865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.299041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.299049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.299056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.299062] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.311128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.311500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.311517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.311525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.311698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.311874] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.311883] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.311890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.311896] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.324233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.324659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.324675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.324683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.324855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.325027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.325035] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.325041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.325048] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.337183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.337600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.337648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.337672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.338256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.338799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.338808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.338815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.338821] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.350148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.350572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.350588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.350596] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.350769] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.350943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.350951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.350961] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.350968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.363085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.363531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.363575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.363600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.364184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.364776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.364803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.364825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.364848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.376149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.376569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.376586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.376594] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.376766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.376938] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.376946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.376953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.376960] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.389177] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.389601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.389618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.758 [2024-12-13 06:42:00.389625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.758 [2024-12-13 06:42:00.389797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.758 [2024-12-13 06:42:00.389970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.758 [2024-12-13 06:42:00.389978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.758 [2024-12-13 06:42:00.389985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.758 [2024-12-13 06:42:00.389991] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:08.758 [2024-12-13 06:42:00.402213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:08.758 [2024-12-13 06:42:00.402566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:08.758 [2024-12-13 06:42:00.402583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:08.759 [2024-12-13 06:42:00.402591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:08.759 [2024-12-13 06:42:00.402764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:08.759 [2024-12-13 06:42:00.402936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:08.759 [2024-12-13 06:42:00.402944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:08.759 [2024-12-13 06:42:00.402951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:08.759 [2024-12-13 06:42:00.402957] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.019 [2024-12-13 06:42:00.415250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.019 [2024-12-13 06:42:00.415646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.019 [2024-12-13 06:42:00.415663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.019 [2024-12-13 06:42:00.415671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.019 [2024-12-13 06:42:00.415843] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.019 [2024-12-13 06:42:00.416016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.019 [2024-12-13 06:42:00.416024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.019 [2024-12-13 06:42:00.416030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.019 [2024-12-13 06:42:00.416037] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.019 [2024-12-13 06:42:00.428236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.019 [2024-12-13 06:42:00.428603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.019 [2024-12-13 06:42:00.428620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.019 [2024-12-13 06:42:00.428628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.019 [2024-12-13 06:42:00.428801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.019 [2024-12-13 06:42:00.428973] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.019 [2024-12-13 06:42:00.428981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.019 [2024-12-13 06:42:00.428987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.019 [2024-12-13 06:42:00.428994] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.019 [2024-12-13 06:42:00.441145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.019 [2024-12-13 06:42:00.441554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.019 [2024-12-13 06:42:00.441599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.019 [2024-12-13 06:42:00.441637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.019 [2024-12-13 06:42:00.442090] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.019 [2024-12-13 06:42:00.442264] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.019 [2024-12-13 06:42:00.442272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.019 [2024-12-13 06:42:00.442278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.019 [2024-12-13 06:42:00.442285] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.019 [2024-12-13 06:42:00.454323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.019 [2024-12-13 06:42:00.454679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.019 [2024-12-13 06:42:00.454697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.019 [2024-12-13 06:42:00.454705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.019 [2024-12-13 06:42:00.454878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.019 [2024-12-13 06:42:00.455055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.019 [2024-12-13 06:42:00.455064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.019 [2024-12-13 06:42:00.455072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.019 [2024-12-13 06:42:00.455079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.019 [2024-12-13 06:42:00.467328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.019 [2024-12-13 06:42:00.467738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.019 [2024-12-13 06:42:00.467755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.019 [2024-12-13 06:42:00.467763] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.019 [2024-12-13 06:42:00.467935] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.019 [2024-12-13 06:42:00.468107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.019 [2024-12-13 06:42:00.468115] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.019 [2024-12-13 06:42:00.468122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.468129] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.480338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.480770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.480786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.480794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.480966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.481143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.481150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.481157] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.481163] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.493391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.493836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.493881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.493905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.494372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.494555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.494564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.494571] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.494577] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 6209.60 IOPS, 24.26 MiB/s [2024-12-13T05:42:00.674Z] [2024-12-13 06:42:00.506379] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.506739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.506757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.506765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.506938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.507110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.507118] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.507124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.507130] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.519263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.519655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.519672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.519679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.519852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.520025] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.520034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.520044] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.520051] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.532273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.532703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.532748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.532773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.533304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.533483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.533492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.533498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.533505] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.545387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.545829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.545873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.545898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.546388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.546571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.546581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.546587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.546594] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.558479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.558899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.558915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.558923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.559094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.559266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.559274] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.559280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.559286] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.571471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.571918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.571963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.571986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.572458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.572632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.572640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.572646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.572653] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.584456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.584905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.584948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.584971] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.585456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.585630] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.585638] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.585645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.585651] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.020 [2024-12-13 06:42:00.597412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.020 [2024-12-13 06:42:00.597804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.020 [2024-12-13 06:42:00.597850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.020 [2024-12-13 06:42:00.597873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.020 [2024-12-13 06:42:00.598377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.020 [2024-12-13 06:42:00.598561] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.020 [2024-12-13 06:42:00.598571] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.020 [2024-12-13 06:42:00.598577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.020 [2024-12-13 06:42:00.598584] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.021 [2024-12-13 06:42:00.610327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.021 [2024-12-13 06:42:00.610785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.021 [2024-12-13 06:42:00.610830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.021 [2024-12-13 06:42:00.610861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.021 [2024-12-13 06:42:00.611310] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.021 [2024-12-13 06:42:00.611488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.021 [2024-12-13 06:42:00.611498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.021 [2024-12-13 06:42:00.611504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.021 [2024-12-13 06:42:00.611511] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.021 [2024-12-13 06:42:00.623309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.021 [2024-12-13 06:42:00.623752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.021 [2024-12-13 06:42:00.623768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.021 [2024-12-13 06:42:00.623775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.021 [2024-12-13 06:42:00.623947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.021 [2024-12-13 06:42:00.624119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.021 [2024-12-13 06:42:00.624127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.021 [2024-12-13 06:42:00.624133] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.021 [2024-12-13 06:42:00.624140] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.021 [2024-12-13 06:42:00.636357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.021 [2024-12-13 06:42:00.636723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.021 [2024-12-13 06:42:00.636740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.021 [2024-12-13 06:42:00.636747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.021 [2024-12-13 06:42:00.636920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.021 [2024-12-13 06:42:00.637092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.021 [2024-12-13 06:42:00.637100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.021 [2024-12-13 06:42:00.637106] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.021 [2024-12-13 06:42:00.637112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.021 [2024-12-13 06:42:00.649462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.021 [2024-12-13 06:42:00.649879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.021 [2024-12-13 06:42:00.649896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.021 [2024-12-13 06:42:00.649903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.021 [2024-12-13 06:42:00.650075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.021 [2024-12-13 06:42:00.650250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.021 [2024-12-13 06:42:00.650258] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.021 [2024-12-13 06:42:00.650264] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.021 [2024-12-13 06:42:00.650271] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.021 [2024-12-13 06:42:00.662416] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.021 [2024-12-13 06:42:00.662843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.021 [2024-12-13 06:42:00.662859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.021 [2024-12-13 06:42:00.662867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.021 [2024-12-13 06:42:00.663039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.021 [2024-12-13 06:42:00.663211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.021 [2024-12-13 06:42:00.663219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.021 [2024-12-13 06:42:00.663225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.021 [2024-12-13 06:42:00.663231] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.544 [2024-12-13 06:42:01.011716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.544 [2024-12-13 06:42:01.012004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.544 [2024-12-13 06:42:01.012020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.544 [2024-12-13 06:42:01.012027] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.544 [2024-12-13 06:42:01.012199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.544 [2024-12-13 06:42:01.012371] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.544 [2024-12-13 06:42:01.012379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.544 [2024-12-13 06:42:01.012386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.544 [2024-12-13 06:42:01.012392] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.544 [2024-12-13 06:42:01.024621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.544 [2024-12-13 06:42:01.024977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.544 [2024-12-13 06:42:01.025014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.544 [2024-12-13 06:42:01.025039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.544 [2024-12-13 06:42:01.025578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.544 [2024-12-13 06:42:01.025758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.544 [2024-12-13 06:42:01.025765] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.544 [2024-12-13 06:42:01.025775] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.544 [2024-12-13 06:42:01.025781] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.544 [2024-12-13 06:42:01.037643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.544 [2024-12-13 06:42:01.038061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.544 [2024-12-13 06:42:01.038077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.544 [2024-12-13 06:42:01.038084] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.544 [2024-12-13 06:42:01.038264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.544 [2024-12-13 06:42:01.038437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.544 [2024-12-13 06:42:01.038445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.544 [2024-12-13 06:42:01.038458] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.544 [2024-12-13 06:42:01.038464] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.544 [2024-12-13 06:42:01.050689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.544 [2024-12-13 06:42:01.051067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.544 [2024-12-13 06:42:01.051083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.544 [2024-12-13 06:42:01.051091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.051263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.051435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.051446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.051465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.051476] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.063733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.064138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.064154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.064162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.064334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.064515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.064523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.064530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.064536] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.076764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.077131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.077148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.077155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.077327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.077507] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.077515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.077522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.077528] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.089753] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.090162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.090179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.090186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.090358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.090543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.090552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.090559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.090565] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.102792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.103140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.103183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.103206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.103698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.103872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.103880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.103886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.103893] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.115820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.116192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.116207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.116218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.116390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.116571] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.116580] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.116586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.116593] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.128923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.129211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.129227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.129235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.129408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.129588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.129596] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.129603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.129609] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.142006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.142277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.142294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.142301] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.142485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.142659] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.142667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.142673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.142680] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.155066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.155406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.155422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.155429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.155609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.155785] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.155793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.155799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.155805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.168033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.168395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.168411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.168418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.545 [2024-12-13 06:42:01.168601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.545 [2024-12-13 06:42:01.168780] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.545 [2024-12-13 06:42:01.168788] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.545 [2024-12-13 06:42:01.168794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.545 [2024-12-13 06:42:01.168801] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.545 [2024-12-13 06:42:01.181021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.545 [2024-12-13 06:42:01.181291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.545 [2024-12-13 06:42:01.181307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.545 [2024-12-13 06:42:01.181314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.546 [2024-12-13 06:42:01.181494] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.546 [2024-12-13 06:42:01.181667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.546 [2024-12-13 06:42:01.181675] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.546 [2024-12-13 06:42:01.181681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.546 [2024-12-13 06:42:01.181687] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1201823 Killed "${NVMF_APP[@]}" "$@" 00:36:09.546 [2024-12-13 06:42:01.194084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.546 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:09.546 [2024-12-13 06:42:01.194420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.546 [2024-12-13 06:42:01.194436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.546 [2024-12-13 06:42:01.194444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.546 [2024-12-13 06:42:01.194623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.546 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:09.546 [2024-12-13 06:42:01.194795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.546 [2024-12-13 06:42:01.194804] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.546 [2024-12-13 06:42:01.194811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.546 [2024-12-13 06:42:01.194817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.546 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:09.546 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:09.546 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=1203238 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 1203238 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1203238 ']' 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:09.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.805 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:09.805 [2024-12-13 06:42:01.207213] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.805 [2024-12-13 06:42:01.207502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.805 [2024-12-13 06:42:01.207519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.207526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.207698] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.207871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.207879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.207887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.207894] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.220390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.220695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.220712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.220720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.220903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.221085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.221098] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.221105] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.221112] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.233690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.233982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.233999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.234006] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.234189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.234372] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.234381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.234388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.234395] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.246789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.247201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.247217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.247224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.247396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.247576] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.247584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.247591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.247597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:09.806 [2024-12-13 06:42:01.250213] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:09.806 [2024-12-13 06:42:01.250251] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:09.806 [2024-12-13 06:42:01.259936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.260343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.260360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.260368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.260547] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.260722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.260733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.260740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.260747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.272949] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.273386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.273403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.273411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.273591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.273765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.273773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.273780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.273786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.286014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.286440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.286461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.286469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.286642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.286815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.286823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.286830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.286837] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.299042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.299377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.299393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.299401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.299578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.299752] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.299761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.299768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.299779] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.312150] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.312554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.312571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.312579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.312752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.312924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.312932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.312939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.312946] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.325162] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.806 [2024-12-13 06:42:01.325545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.806 [2024-12-13 06:42:01.325563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.806 [2024-12-13 06:42:01.325571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.806 [2024-12-13 06:42:01.325744] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.806 [2024-12-13 06:42:01.325916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.806 [2024-12-13 06:42:01.325925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.806 [2024-12-13 06:42:01.325931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.806 [2024-12-13 06:42:01.325939] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.806 [2024-12-13 06:42:01.331031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:09.807 [2024-12-13 06:42:01.338138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.338509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.338527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.338535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.338709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.338882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.338891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.338898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.338905] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.351119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.351538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.351555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.351563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.351736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.351913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.351922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.351929] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.351935] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:09.807 [2024-12-13 06:42:01.353039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:09.807 [2024-12-13 06:42:01.353066] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:09.807 [2024-12-13 06:42:01.353073] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:09.807 [2024-12-13 06:42:01.353080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:36:09.807 [2024-12-13 06:42:01.353085] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:09.807 [2024-12-13 06:42:01.354231] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:09.807 [2024-12-13 06:42:01.354336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.807 [2024-12-13 06:42:01.354337] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:09.807 [2024-12-13 06:42:01.364176] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.364643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.364663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.364672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.364846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.365021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.365029] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.365036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.365044] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.377279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.377595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.377616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.377625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.377799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.377979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.377989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.377998] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.378007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.390401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.390844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.390865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.390874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.391046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.391221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.391229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.391236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.391245] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.403491] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.403810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.403830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.403840] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.404014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.404393] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.404404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.404413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.404421] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.416548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.417009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.417029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.417039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.417213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.417389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.417398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.417406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.417419] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.429649] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 [2024-12-13 06:42:01.429928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.429945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.429954] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 [2024-12-13 06:42:01.430126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.430300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.430309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.430315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.430323] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 [2024-12-13 06:42:01.442718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.807 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.807 [2024-12-13 06:42:01.443147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.807 [2024-12-13 06:42:01.443164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.807 [2024-12-13 06:42:01.443171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.807 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:09.807 [2024-12-13 06:42:01.443344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.807 [2024-12-13 06:42:01.443524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.807 [2024-12-13 06:42:01.443533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.807 [2024-12-13 06:42:01.443540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.807 [2024-12-13 06:42:01.443547] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:09.807 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:09.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:09.808 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:09.808 [2024-12-13 06:42:01.455752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:09.808 [2024-12-13 06:42:01.456086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:09.808 [2024-12-13 06:42:01.456104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:09.808 [2024-12-13 06:42:01.456111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:09.808 [2024-12-13 06:42:01.456284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:09.808 [2024-12-13 06:42:01.456461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:09.808 [2024-12-13 06:42:01.456470] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:09.808 [2024-12-13 06:42:01.456483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:09.808 [2024-12-13 06:42:01.456490] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 [2024-12-13 06:42:01.468860] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.067 [2024-12-13 06:42:01.469197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.067 [2024-12-13 06:42:01.469213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.067 [2024-12-13 06:42:01.469221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.067 [2024-12-13 06:42:01.469395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.067 [2024-12-13 06:42:01.469573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.067 [2024-12-13 06:42:01.469583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.067 [2024-12-13 06:42:01.469589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.067 [2024-12-13 06:42:01.469596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.067 [2024-12-13 06:42:01.481965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.067 [2024-12-13 06:42:01.482258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.067 [2024-12-13 06:42:01.482274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.067 [2024-12-13 06:42:01.482282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.067 [2024-12-13 06:42:01.482458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.067 [2024-12-13 06:42:01.482632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.067 [2024-12-13 06:42:01.482641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.067 [2024-12-13 06:42:01.482648] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.067 [2024-12-13 06:42:01.482654] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 [2024-12-13 06:42:01.485078] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.067 [2024-12-13 06:42:01.495017] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.067 [2024-12-13 06:42:01.495308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.067 [2024-12-13 06:42:01.495329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.067 [2024-12-13 06:42:01.495337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.067 [2024-12-13 06:42:01.495520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.067 [2024-12-13 06:42:01.495695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.067 [2024-12-13 06:42:01.495703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.067 [2024-12-13 06:42:01.495710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.067 [2024-12-13 06:42:01.495717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 5174.67 IOPS, 20.21 MiB/s [2024-12-13T05:42:01.721Z] [2024-12-13 06:42:01.508086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.067 [2024-12-13 06:42:01.508377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.067 [2024-12-13 06:42:01.508393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.067 [2024-12-13 06:42:01.508401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.067 [2024-12-13 06:42:01.508578] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.067 [2024-12-13 06:42:01.508751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.067 [2024-12-13 06:42:01.508759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.067 [2024-12-13 06:42:01.508766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.067 [2024-12-13 06:42:01.508773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 [2024-12-13 06:42:01.521148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.067 [2024-12-13 06:42:01.521581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.067 [2024-12-13 06:42:01.521598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.067 [2024-12-13 06:42:01.521606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.067 [2024-12-13 06:42:01.521779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.067 [2024-12-13 06:42:01.521952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.067 [2024-12-13 06:42:01.521960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.067 [2024-12-13 06:42:01.521966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.067 [2024-12-13 06:42:01.521973] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.067 Malloc0 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.067 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.067 [2024-12-13 06:42:01.534174] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.068 [2024-12-13 06:42:01.534465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.068 [2024-12-13 06:42:01.534482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.068 [2024-12-13 06:42:01.534489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.068 [2024-12-13 06:42:01.534661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.068 [2024-12-13 06:42:01.534834] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.068 [2024-12-13 06:42:01.534843] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.068 [2024-12-13 06:42:01.534849] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.068 [2024-12-13 06:42:01.534856] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:10.068 [2024-12-13 06:42:01.547235] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.068 [2024-12-13 06:42:01.547511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:10.068 [2024-12-13 06:42:01.547529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x134f490 with addr=10.0.0.2, port=4420 00:36:10.068 [2024-12-13 06:42:01.547537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x134f490 is same with the state(6) to be set 00:36:10.068 [2024-12-13 06:42:01.547710] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x134f490 (9): Bad file descriptor 00:36:10.068 [2024-12-13 06:42:01.547882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:10.068 [2024-12-13 06:42:01.547891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:10.068 [2024-12-13 06:42:01.547897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:10.068 [2024-12-13 06:42:01.547904] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:10.068 [2024-12-13 06:42:01.548453] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.068 06:42:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1202090 [2024-12-13 06:42:01.560266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:10.068 [2024-12-13 06:42:01.585155] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:36:11.940 5955.29 IOPS, 23.26 MiB/s [2024-12-13T05:42:04.529Z] 6636.12 IOPS, 25.92 MiB/s [2024-12-13T05:42:05.906Z] 7199.67 IOPS, 28.12 MiB/s [2024-12-13T05:42:06.842Z] 7630.00 IOPS, 29.80 MiB/s [2024-12-13T05:42:07.778Z] 8015.00 IOPS, 31.31 MiB/s [2024-12-13T05:42:08.715Z] 8325.25 IOPS, 32.52 MiB/s [2024-12-13T05:42:09.651Z] 8550.08 IOPS, 33.40 MiB/s [2024-12-13T05:42:10.587Z] 8766.36 IOPS, 34.24 MiB/s [2024-12-13T05:42:10.587Z] 8959.13 IOPS, 35.00 MiB/s 00:36:18.933 Latency(us) 00:36:18.933 [2024-12-13T05:42:10.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:18.933 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:18.933 Verification LBA range: start 0x0 length 0x4000 00:36:18.933 Nvme1n1 : 15.01 8960.49 35.00 10755.18 0.00 6472.47 442.76 16103.13 00:36:18.933 [2024-12-13T05:42:10.587Z] =================================================================================================================== 
[2024-12-13T05:42:10.587Z] Total : 8960.49 35.00 10755.18 0.00 6472.47 442.76 16103.13 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:19.192 rmmod nvme_tcp 00:36:19.192 rmmod nvme_fabrics 00:36:19.192 rmmod nvme_keyring 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 1203238 ']' 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 1203238 
00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1203238 ']' 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1203238 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1203238 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1203238' 00:36:19.192 killing process with pid 1203238 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1203238 00:36:19.192 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1203238 00:36:19.451 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:19.451 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:19.451 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:19.451 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:19.452 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:19.452 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:19.452 06:42:10 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:19.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:19.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:19.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:19.452 06:42:11 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:21.987 00:36:21.987 real 0m25.889s 00:36:21.987 user 1m0.485s 00:36:21.987 sys 0m6.613s 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.987 ************************************ 00:36:21.987 END TEST nvmf_bdevperf 00:36:21.987 ************************************ 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:21.987 ************************************ 00:36:21.987 START TEST nvmf_target_disconnect 00:36:21.987 ************************************ 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:21.987 * Looking for test storage... 
00:36:21.987 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:21.987 06:42:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.987 
--rc genhtml_branch_coverage=1 00:36:21.987 --rc genhtml_function_coverage=1 00:36:21.987 --rc genhtml_legend=1 00:36:21.987 --rc geninfo_all_blocks=1 00:36:21.987 --rc geninfo_unexecuted_blocks=1 00:36:21.987 00:36:21.987 ' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.987 --rc genhtml_branch_coverage=1 00:36:21.987 --rc genhtml_function_coverage=1 00:36:21.987 --rc genhtml_legend=1 00:36:21.987 --rc geninfo_all_blocks=1 00:36:21.987 --rc geninfo_unexecuted_blocks=1 00:36:21.987 00:36:21.987 ' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.987 --rc genhtml_branch_coverage=1 00:36:21.987 --rc genhtml_function_coverage=1 00:36:21.987 --rc genhtml_legend=1 00:36:21.987 --rc geninfo_all_blocks=1 00:36:21.987 --rc geninfo_unexecuted_blocks=1 00:36:21.987 00:36:21.987 ' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:21.987 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:21.987 --rc genhtml_branch_coverage=1 00:36:21.987 --rc genhtml_function_coverage=1 00:36:21.987 --rc genhtml_legend=1 00:36:21.987 --rc geninfo_all_blocks=1 00:36:21.987 --rc geninfo_unexecuted_blocks=1 00:36:21.987 00:36:21.987 ' 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:21.987 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s 
extglob 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:21.988 06:42:13 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:21.988 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:21.988 06:42:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:27.261 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:27.521 
06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:27.521 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:27.521 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:27.521 Found net devices under 0000:af:00.0: cvl_0_0 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:27.521 Found net devices under 0000:af:00.1: cvl_0_1 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:27.521 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:27.522 06:42:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:27.522 06:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:27.522 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:27.522 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.358 ms 00:36:27.522 00:36:27.522 --- 10.0.0.2 ping statistics --- 00:36:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.522 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:27.522 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:27.522 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:36:27.522 00:36:27.522 --- 10.0.0.1 ping statistics --- 00:36:27.522 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:27.522 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:27.522 06:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:27.522 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:27.781 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:27.781 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.781 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.781 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.781 ************************************ 00:36:27.781 START TEST nvmf_target_disconnect_tc1 00:36:27.781 ************************************ 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.782 [2024-12-13 06:42:19.347912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.782 [2024-12-13 06:42:19.347955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c04c50 with 
addr=10.0.0.2, port=4420 00:36:27.782 [2024-12-13 06:42:19.347978] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:27.782 [2024-12-13 06:42:19.347986] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:27.782 [2024-12-13 06:42:19.347992] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:27.782 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:27.782 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:27.782 Initializing NVMe Controllers 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:27.782 00:36:27.782 real 0m0.116s 00:36:27.782 user 0m0.049s 00:36:27.782 sys 0m0.067s 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:27.782 ************************************ 00:36:27.782 END TEST nvmf_target_disconnect_tc1 00:36:27.782 ************************************ 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:27.782 06:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:27.782 ************************************ 00:36:27.782 START TEST nvmf_target_disconnect_tc2 00:36:27.782 ************************************ 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:27.782 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1208578 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1208578 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1208578 ']' 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:28.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.041 [2024-12-13 06:42:19.489316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:28.041 [2024-12-13 06:42:19.489359] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:28.041 [2024-12-13 06:42:19.571803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:28.041 [2024-12-13 06:42:19.594562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:28.041 [2024-12-13 06:42:19.594601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:28.041 [2024-12-13 06:42:19.594609] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:28.041 [2024-12-13 06:42:19.594615] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:28.041 [2024-12-13 06:42:19.594620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
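The nvmf_tcp_init sequence traced above (nvmf/common.sh@250-291) splits the two NIC ports into separate network stacks: the target-side interface is moved into a private network namespace with 10.0.0.2, the initiator keeps 10.0.0.1 in the root namespace, an iptables rule admits TCP/4420, and a ping in each direction verifies the link. A minimal sketch of those steps, with placeholder veth names standing in for the cvl_0_0/cvl_0_1 hardware ports and a `run` helper that only prints the commands (the real ones need root):

```shell
#!/usr/bin/env bash
# Sketch of the namespace split performed by nvmf_tcp_init above.
# TGT_IF/INI_IF are placeholder names; "run" prints instead of executing,
# so no root privileges or real interfaces are required.
set -euo pipefail

NS=cvl_0_0_ns_spdk          # namespace name taken from the log
TGT_IF=veth_tgt             # placeholder for cvl_0_0 (target side)
INI_IF=veth_ini             # placeholder for cvl_0_1 (initiator side)

run() { printf '%s\n' "$*"; }   # swap body for "$@" to actually execute

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"                          # target NIC enters the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                      # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"  # target address
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                         # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1                     # target -> initiator
```

Because the target app is later launched under `ip netns exec cvl_0_0_ns_spdk`, killing it leaves 10.0.0.2:4420 with no listener, which is what the reconnect test exercises below.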
00:36:28.041 [2024-12-13 06:42:19.595982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:28.041 [2024-12-13 06:42:19.596093] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:28.041 [2024-12-13 06:42:19.596201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:28.041 [2024-12-13 06:42:19.596202] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:28.041 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.300 Malloc0 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.300 06:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.300 [2024-12-13 06:42:19.774071] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.300 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.301 06:42:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.301 [2024-12-13 06:42:19.803298] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1208764 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:28.301 06:42:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:30.207 06:42:21 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1208578 00:36:30.207 06:42:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 
Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 [2024-12-13 06:42:21.841904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O 
failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Write completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 
00:36:30.207 [2024-12-13 06:42:21.842110] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.207 Read completed with error (sct=0, sc=8) 00:36:30.207 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 
starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 [2024-12-13 06:42:21.842300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:30.208 [2024-12-13 06:42:21.842403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.842427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.842623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.842639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 
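Every failed completion in the runs above carries the same status pair, (sct=0, sc=8). Status code type 0 is the NVMe generic command status set, and code 08h there is, per my reading of the NVMe base specification, "Command Aborted due to SQ Deletion", which fits a target that was just killed with `kill -9`. A small hedged decoder covering only a few generic codes (verify against the spec before relying on it):

```shell
# Hedged decoder for the (sct, sc) pairs printed in the completions above.
# Only a handful of NVMe generic-status (sct=0) codes are included.
decode_status() {
  local sct=$1 sc=$2
  if [ "$sct" -ne 0 ]; then
    echo "sct=$sct: non-generic status code type"
    return
  fi
  case "$sc" in
    0) echo "Successful Completion" ;;
    4) echo "Data Transfer Error" ;;
    7) echo "Command Abort Requested" ;;
    8) echo "Command Aborted due to SQ Deletion" ;;
    *) echo "generic status code $sc (see NVMe base spec)" ;;
  esac
}

decode_status 0 8   # the pair reported throughout this run
```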
00:36:30.208 [2024-12-13 06:42:21.842728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.842738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.842908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.842918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.843123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.843134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 
00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write 
completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Read completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 Write completed with error (sct=0, sc=8) 00:36:30.208 starting I/O failed 00:36:30.208 [2024-12-13 06:42:21.843327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:30.208 [2024-12-13 06:42:21.843529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.843552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.843719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.843730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.843951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.843963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 
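The posix_sock_create errors above all report `connect() failed, errno = 111`, which on Linux is ECONNREFUSED: the SYN to 10.0.0.2:4420 is answered with a RST because the killed target no longer listens there. One way to confirm the mapping from a shell, leaning on python3's errno module:

```shell
# Translate a numeric errno to its symbolic name and message (Linux values).
errno_name() {
  python3 -c 'import errno, os, sys
n = int(sys.argv[1])
print(errno.errorcode[n], os.strerror(n))' "$1"
}

errno_name 111   # on Linux: ECONNREFUSED Connection refused
```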
00:36:30.208 [2024-12-13 06:42:21.844102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 
00:36:30.208 [2024-12-13 06:42:21.844519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 00:36:30.208 [2024-12-13 06:42:21.844926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.208 [2024-12-13 06:42:21.844935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.208 qpair failed and we were unable to recover it. 
00:36:30.208 [2024-12-13 06:42:21.845011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.845437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.845924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.845933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.846007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.846548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.846987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.846998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.847238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.847270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.847391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.847427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.847616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.847648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.847852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.847892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.847976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.847986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.848204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.848646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.848953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.848963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 
00:36:30.209 [2024-12-13 06:42:21.849033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.849042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.849229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.849239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.849313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.849322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.849394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.209 [2024-12-13 06:42:21.849404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.209 qpair failed and we were unable to recover it. 00:36:30.209 [2024-12-13 06:42:21.849473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.849483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.849623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.849632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.849690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.849699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.849892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.849902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.849964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.849973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.850187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.850598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.850972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.850981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.851047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.851413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.851784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.851954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.851966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.852426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.852925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.852937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 
00:36:30.210 [2024-12-13 06:42:21.853016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.853028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.853089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.853102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.853168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.210 [2024-12-13 06:42:21.853180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.210 qpair failed and we were unable to recover it. 00:36:30.210 [2024-12-13 06:42:21.853262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.211 [2024-12-13 06:42:21.853274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.211 qpair failed and we were unable to recover it. 00:36:30.211 [2024-12-13 06:42:21.853406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.211 [2024-12-13 06:42:21.853420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.211 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.873775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.873806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.873997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.874028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.874148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.874180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.874361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.874387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.874592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.874624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.874902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.874934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.875169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.875200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.875492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.875525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.875791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.875823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.876082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.876113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.876246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.876272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.876524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.876552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.876811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.876836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.877089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.877115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.877289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.877315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.877543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.877576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.877784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.877815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.878078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.878104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.878274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.878300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.878501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.878534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.878788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.878818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.879021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.879052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.879285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.879311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.879604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.879632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.879746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.879772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.879931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.879956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.880136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.880162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.880267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.880293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.880466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.880493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.880736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.880768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.881005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.881042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.881281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.881321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.881569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.881596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.881804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.881836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.882100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.882131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 
00:36:30.489 [2024-12-13 06:42:21.882370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.882400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.882693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.489 [2024-12-13 06:42:21.882725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.489 qpair failed and we were unable to recover it. 00:36:30.489 [2024-12-13 06:42:21.882940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.882971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.883144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.883174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.883439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.883489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.883673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.883704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.883942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.883974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.884166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.884197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.884384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.884414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.884698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.884731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.884912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.884943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.885128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.885160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.885347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.885378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.885550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.885583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.885708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.885739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.885921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.885952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.886213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.886245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.886526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.886558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.886837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.886868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.887049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.887081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.887318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.887349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.887579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.887612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.887801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.887832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.888072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.888103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.888314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.888345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.888611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.888644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.888884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.888915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.889171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.889201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.889469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.889502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.889786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.889817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.890092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.890123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.890407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.890439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.890656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.890687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.890877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.890907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.891089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.891120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.891380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.891416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.891705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.891739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.891927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.891959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.892238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.892270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.892400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.892432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 
00:36:30.490 [2024-12-13 06:42:21.892683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.490 [2024-12-13 06:42:21.892715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.490 qpair failed and we were unable to recover it. 00:36:30.490 [2024-12-13 06:42:21.892886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.491 [2024-12-13 06:42:21.892917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.491 qpair failed and we were unable to recover it. 00:36:30.491 [2024-12-13 06:42:21.893043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.491 [2024-12-13 06:42:21.893074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.491 qpair failed and we were unable to recover it. 00:36:30.491 [2024-12-13 06:42:21.893285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.491 [2024-12-13 06:42:21.893315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.491 qpair failed and we were unable to recover it. 00:36:30.491 [2024-12-13 06:42:21.893602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.491 [2024-12-13 06:42:21.893635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.491 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.921367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.921404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.921697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.921729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.921922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.921953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.922168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.922199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.922468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.922500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.922645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.922677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.922865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.922895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.923158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.923190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.923309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.923340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.923600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.923633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.923896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.923928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.924175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.924206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.924395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.924425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.924730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.924762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.924999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.925030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.925288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.925319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.925621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.925654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.925914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.925945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.926235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.926265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.926406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.926437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.926631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.926663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.926910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.926940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.927193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.927225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.927341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.927373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.927545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.927577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.927821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.927853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.928099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.928130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.928325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.928356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.928549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.928582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.928832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.928863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.928996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.929027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.929226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.929257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.929567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.929600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.929785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.929815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 00:36:30.494 [2024-12-13 06:42:21.929944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.494 [2024-12-13 06:42:21.929974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.494 qpair failed and we were unable to recover it. 
00:36:30.494 [2024-12-13 06:42:21.930244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.930275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.930468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.930500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.930762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.930793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.930925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.930956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.931245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.931276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.931490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.931529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.931794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.931826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.932096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.932127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.932318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.932349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.932620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.932653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.932846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.932876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.933128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.933159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.933400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.933430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.933705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.933737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.934025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.934056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.934332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.934363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.934491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.934524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.934792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.934823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.935009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.935040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.935246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.935278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.935401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.935431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.935593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.935625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.935756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.935788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.935981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.936012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.936236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.936267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.936464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.936496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.936684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.936715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.936909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.936940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.937214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.937245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.937524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.937557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.937864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.937894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.938148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.938183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.938479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.938517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.938779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.938811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.939098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.939129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 
00:36:30.495 [2024-12-13 06:42:21.939264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.939295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.939494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.939527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.939720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.939751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.495 [2024-12-13 06:42:21.939927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.495 [2024-12-13 06:42:21.939958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.495 qpair failed and we were unable to recover it. 00:36:30.496 [2024-12-13 06:42:21.940170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.496 [2024-12-13 06:42:21.940201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.496 qpair failed and we were unable to recover it. 
00:36:30.496 [2024-12-13 06:42:21.940473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.496 [2024-12-13 06:42:21.940505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.496 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet repeats (~115 occurrences between 06:42:21.940473 and 06:42:21.969626), all for tqpair=0x7fb7cc000b90, addr=10.0.0.2, port=4420 ...]
00:36:30.499 [2024-12-13 06:42:21.969592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.499 [2024-12-13 06:42:21.969626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.499 qpair failed and we were unable to recover it.
00:36:30.499 [2024-12-13 06:42:21.969917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.969950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.970218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.970251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.970558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.970591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.970846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.970878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.971205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.971237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.971376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.971407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.971701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.971733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.972003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.972036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.972315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.972346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.972550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.972603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.972740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.972772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.973031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.973064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.973256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.973288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.973486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.973521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.973703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.973737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.973985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.974021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.974260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.974295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.974535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.974570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.974772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.974806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.975083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.975116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.975312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.975343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.975580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.975614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.975809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.975840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.976041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.976073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.976284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.976317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.976517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.976550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.976807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.976840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.976969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.977000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.977271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.977303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.977503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.977536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 
00:36:30.499 [2024-12-13 06:42:21.977736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.977767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.977990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.978022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.978223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.978256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.978470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.499 [2024-12-13 06:42:21.978502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.499 qpair failed and we were unable to recover it. 00:36:30.499 [2024-12-13 06:42:21.978629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.978661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.978915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.978947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.979148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.979180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.979377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.979414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.979663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.979696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.979897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.979929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.980133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.980165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.980441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.980500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.980763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.980794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.981126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.981160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.981350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.981382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.981636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.981669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.981882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.981914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.982050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.982080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.982354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.982386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.982679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.982714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.982920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.982951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.983237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.983269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.983561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.983598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.983740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.983772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.983995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.984027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.984314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.984346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.984585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.984618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.984753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.984784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.984989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.985020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.985215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.985247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.985471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.985504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.985755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.985788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.986007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.986038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.986313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.986345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.986649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.986683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 
00:36:30.500 [2024-12-13 06:42:21.986893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.986925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.987113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.987145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.987420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.987460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.500 [2024-12-13 06:42:21.987606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.500 [2024-12-13 06:42:21.987639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.500 qpair failed and we were unable to recover it. 00:36:30.501 [2024-12-13 06:42:21.987793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.987825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 
00:36:30.501 [2024-12-13 06:42:21.988029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.988060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 00:36:30.501 [2024-12-13 06:42:21.988356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.988388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 00:36:30.501 [2024-12-13 06:42:21.988716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.988750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 00:36:30.501 [2024-12-13 06:42:21.988978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.989010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 00:36:30.501 [2024-12-13 06:42:21.989260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.501 [2024-12-13 06:42:21.989292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.501 qpair failed and we were unable to recover it. 
00:36:30.501 [2024-12-13 06:42:21.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.501 [2024-12-13 06:42:21.989588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.501 qpair failed and we were unable to recover it.
00:36:30.504 [preceding three-line error sequence repeated verbatim through 2024-12-13 06:42:22.019314: connect() failed with errno = 111 (ECONNREFUSED) on every attempt for tqpair=0x7fb7cc000b90 at addr=10.0.0.2, port=4420, and the qpair could not be recovered]
00:36:30.504 [2024-12-13 06:42:22.019532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.019565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.019845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.019877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.020144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.020177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.020318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.020350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.020545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.020578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.020772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.020803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.021055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.021087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.021228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.021264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.021541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.021574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.021807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.021839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.022077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.022109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.022357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.022388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.022700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.022733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.023018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.023050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.023301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.023333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.023641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.023674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.023913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.023944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.024221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.024252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.024466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.024500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.024703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.024735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.025010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.025041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.025238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.025270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.025544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.025578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.025855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.025886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.026092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.026123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.026370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.026401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.026712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.026745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.027020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.027051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.027301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.027332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.027531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.027564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 
00:36:30.504 [2024-12-13 06:42:22.027844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.027875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.028055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.028087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.028288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.028320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.504 [2024-12-13 06:42:22.028514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.504 [2024-12-13 06:42:22.028547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.504 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.028805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.028837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.029109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.029141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.029327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.029357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.029574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.029608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.029881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.029912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.030183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.030215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.030513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.030547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.030832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.030863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.031146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.031177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.031392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.031424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.031627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.031658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.031917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.031949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.032078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.032110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.032382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.032419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.032711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.032744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.032944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.032976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.033186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.033217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.033471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.033504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.033761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.033793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.034093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.034124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.034274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.034305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.034588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.034621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.034892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.034923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.035111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.035142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.035326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.035356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.035568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.035602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.035854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.035886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.036144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.036176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.036472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.036506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.036691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.036723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.037016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.037048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 
00:36:30.505 [2024-12-13 06:42:22.037328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.037359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.037577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.037610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.037891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.505 [2024-12-13 06:42:22.037922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.505 qpair failed and we were unable to recover it. 00:36:30.505 [2024-12-13 06:42:22.038120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.038151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 00:36:30.506 [2024-12-13 06:42:22.038413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.038444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 
00:36:30.506 [2024-12-13 06:42:22.038667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.038698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 00:36:30.506 [2024-12-13 06:42:22.038927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.038958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 00:36:30.506 [2024-12-13 06:42:22.039235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.039266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 00:36:30.506 [2024-12-13 06:42:22.039446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.039491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 00:36:30.506 [2024-12-13 06:42:22.039713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.506 [2024-12-13 06:42:22.039746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.506 qpair failed and we were unable to recover it. 
00:36:30.506 [2024-12-13 06:42:22.040004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.040035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.040313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.040344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.040564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.040598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.040873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.040904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.041171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.041203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.041505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.041539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.041672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.041704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.041976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.042007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.042305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.042337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.042605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.042638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.042920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.042951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.043239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.043271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.043547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.043586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.043894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.043925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.044202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.044235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.044501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.044534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.044836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.044867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.045134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.045166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.045467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.045500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.045727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.045759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.046032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.046064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.046329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.046361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.046610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.046643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.046834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.046866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.047144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.047176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.047475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.047508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.047776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.047808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.048003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.048035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.048213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.048244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.048442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.048501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.048704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.506 [2024-12-13 06:42:22.048736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.506 qpair failed and we were unable to recover it.
00:36:30.506 [2024-12-13 06:42:22.049010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.049040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.049319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.049347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.049574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.049606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.049857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.049886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.050145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.050173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.050365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.050393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.050594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.050625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.050894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.050923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.051161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.051191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.051393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.051423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.051703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.051733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.052044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.052072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.052355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.052386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.052590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.052622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.052899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.052932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.053206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.053236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.053531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.053564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.053803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.053833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.054096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.054144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.054365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.054396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.054605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.054644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.054928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.054969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.055169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.055202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.055414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.055464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.055665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.055700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.055907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.055938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.056073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.056106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.056324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.056356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.056673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.056708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.056952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.056986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.057186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.057218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.057491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.057525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.057807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.057840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.058122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.058156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.058408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.058440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.058738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.058771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.058972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.059005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.059197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.059231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.059513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.507 [2024-12-13 06:42:22.059546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.507 qpair failed and we were unable to recover it.
00:36:30.507 [2024-12-13 06:42:22.059752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.059784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.060040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.060072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.060254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.060286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.060539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.060574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.060831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.060864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.061020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.061052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.061241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.061274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.061472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.061506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.061698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.061731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.061952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12d6c70 is same with the state(6) to be set
00:36:30.508 [2024-12-13 06:42:22.062329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.062405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.062709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.062746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.062962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.062995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.063251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.063284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.063487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.063522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.063782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.063814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.064089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.064123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.064321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.064354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.064613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.064646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.064844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.064876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.065080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.065113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.065241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.065274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.065429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.065473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.065713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.065748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.066054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.066087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.066225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.066257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.066529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.066563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.066816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.066849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.067119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.067151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.067433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.067477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.067752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.067786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.067986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.068019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.068211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.068243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.068521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.068556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.068680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.068712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.068916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.068949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.069166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.069205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.069391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.069423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.069709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.069741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.508 [2024-12-13 06:42:22.069890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.508 [2024-12-13 06:42:22.069922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.508 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.070135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.070167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.070363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.070396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.070530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.070563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.070768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.070800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.071057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.071089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.071368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.509 [2024-12-13 06:42:22.071401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.509 qpair failed and we were unable to recover it.
00:36:30.509 [2024-12-13 06:42:22.071709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.071742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.071924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.071956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.072231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.072264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.072541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.072575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.072862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.072895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 
00:36:30.509 [2024-12-13 06:42:22.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.073146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.073415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.073447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.073667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.073701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.073937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.073971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.074181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.074214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 
00:36:30.509 [2024-12-13 06:42:22.074494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.074528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.074718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.074750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.074938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.074972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.075184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.075217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.075478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.075511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 
00:36:30.509 [2024-12-13 06:42:22.075639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.075671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.075872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.075905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.076112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.076145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.076330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.076362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.076643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.076677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 
00:36:30.509 [2024-12-13 06:42:22.076873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.076905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.077090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.077124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.077347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.077380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.077568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.077601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.077793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.077826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 
00:36:30.509 [2024-12-13 06:42:22.077957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.077990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.078313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.078345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.078543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.509 [2024-12-13 06:42:22.078577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.509 qpair failed and we were unable to recover it. 00:36:30.509 [2024-12-13 06:42:22.078784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.078817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.079092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.079124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.079355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.079393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.079597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.079630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.079840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.079871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.080141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.080174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.080427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.080468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.080725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.080757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.081039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.081071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.081327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.081361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.081558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.081592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.081730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.081762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.082038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.082071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.082275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.082307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.082587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.082623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.082806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.082838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.083054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.083087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.083341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.083376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.083571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.083604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.083831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.083867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.084054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.084086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.084272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.084304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.084594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.084629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.084825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.084858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.085112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.085147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.085407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.085439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.085728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.085760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.085975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.086009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.086196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.086227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.086511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.086547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.086796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.086828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.087110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.087142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.087443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.087487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.087759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.087791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.087984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.088017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.088161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.088194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.088422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.088464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 
00:36:30.510 [2024-12-13 06:42:22.088647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.088679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.088936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.510 [2024-12-13 06:42:22.088969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.510 qpair failed and we were unable to recover it. 00:36:30.510 [2024-12-13 06:42:22.089168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.089200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.089339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.089371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.089505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.089540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 
00:36:30.511 [2024-12-13 06:42:22.089721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.089761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.089955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.089988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.090213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.090246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.090475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.090508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 00:36:30.511 [2024-12-13 06:42:22.090657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.090690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 
00:36:30.511 [2024-12-13 06:42:22.090914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.511 [2024-12-13 06:42:22.090947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.511 qpair failed and we were unable to recover it. 
[... identical connect() failure / qpair-recovery sequence repeats for tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 from 2024-12-13 06:42:22.091130 through 06:42:22.117581 ...]
00:36:30.513 [2024-12-13 06:42:22.117895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.513 [2024-12-13 06:42:22.117971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.513 qpair failed and we were unable to recover it. 
[... identical connect() failure / qpair-recovery sequence repeats for tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 from 2024-12-13 06:42:22.118190 through 06:42:22.121407 ...]
00:36:30.514 [2024-12-13 06:42:22.121569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.121613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.121879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.121911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.122099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.122134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.122317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.122349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.122625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.122658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 
00:36:30.514 [2024-12-13 06:42:22.122929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.122962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.123257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.123289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.123560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.123593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.123806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.123838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.124034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.124066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 
00:36:30.514 [2024-12-13 06:42:22.124340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.124372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.124655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.124689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.124910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.124942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.125168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.125202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.125483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.125517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 
00:36:30.514 [2024-12-13 06:42:22.125731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.125763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.514 [2024-12-13 06:42:22.125895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.514 [2024-12-13 06:42:22.125926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.514 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.126206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.126238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.126432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.126481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.126775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.126807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 
00:36:30.791 [2024-12-13 06:42:22.127065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.127097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.127397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.127429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.127672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.127706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.127950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.127982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.128202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.128236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 
00:36:30.791 [2024-12-13 06:42:22.128444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.128490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.128790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.128823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.129085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.129118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.129414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.791 [2024-12-13 06:42:22.129446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.791 qpair failed and we were unable to recover it. 00:36:30.791 [2024-12-13 06:42:22.129737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.129769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.129974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.130006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.130284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.130317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.130527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.130563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.130819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.130853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.131046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.131078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.131363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.131394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.131531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.131564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.131834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.131868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.132051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.132082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.132357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.132393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.132548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.132586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.132867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.132899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.133170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.133202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.133312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.133345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.133626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.133661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.133791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.133823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.134014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.134047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.134296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.134328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.134607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.134641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.134924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.134957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.135152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.135184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.135407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.135439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.135726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.135759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.135967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.135999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.136206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.136238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.136471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.136504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.136757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.136790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.137045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.137078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.137348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.137381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.137581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.137615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.137894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.137926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.138125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.138157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.138352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.138384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.138610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.138643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.138848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.138880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 
00:36:30.792 [2024-12-13 06:42:22.139088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.139122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.139318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.139350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.792 [2024-12-13 06:42:22.139610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.792 [2024-12-13 06:42:22.139645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.792 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.139944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.139977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.140178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.140216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 
00:36:30.793 [2024-12-13 06:42:22.140499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.140532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.140802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.140835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.140993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.141025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.141208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.141239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 00:36:30.793 [2024-12-13 06:42:22.141493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.793 [2024-12-13 06:42:22.141528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.793 qpair failed and we were unable to recover it. 
00:36:30.793 [2024-12-13 06:42:22.141725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.141757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.142057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.142095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.142379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.142411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.142683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.142715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.142923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.142954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.143135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.143168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.143426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.143473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.143745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.143777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.144084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.144117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.144326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.144360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.144662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.144697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.144974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.145007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.145298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.145335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.145554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.145598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.145847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.145879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.146074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.146107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.146248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.146281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.146485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.146522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.146652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.146683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.146972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.147006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.147281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.147317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.147602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.147637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.147827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.147859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.148136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.148173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.148475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.148511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.148701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.148732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.148878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.148912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.149186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.149219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.149480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.149520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.149710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.149742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.150013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.150047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.793 qpair failed and we were unable to recover it.
00:36:30.793 [2024-12-13 06:42:22.150266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.793 [2024-12-13 06:42:22.150298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.150494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.150544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.150841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.150877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.151036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.151069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.151325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.151359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.151579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.151617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.151907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.151945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.152223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.152257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.152477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.152512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.152736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.152768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.152988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.153022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.153245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.153275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.153528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.153562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.153822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.153855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.154109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.154142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.154285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.154317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.154504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.154539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.154741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.154773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.154995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.155028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.155284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.155320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.155542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.155575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.155848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.155881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.156108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.156141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.156404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.156437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.156707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.156744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.156945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.156979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.157252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.157283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.157563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.157597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.157742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.157777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.157979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.158013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.158213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.158245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.158474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.158508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.158697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.158730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.159003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.159036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.159235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.159268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.159540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.159574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.159774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.159806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.160106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.160140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.160283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.794 [2024-12-13 06:42:22.160315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.794 qpair failed and we were unable to recover it.
00:36:30.794 [2024-12-13 06:42:22.160587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.160620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.160877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.160911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.161157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.161195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.161466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.161500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.161806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.161839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.162055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.162087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.162299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.162332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.162549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.162585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.162774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.162806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.163005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.163037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.163261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.163296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.163575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.163610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.163888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.163920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.164154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.164187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.164474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.164508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.164720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.164753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.165040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.165074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.165298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.165332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.165527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.165562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.165749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.165781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.165976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.166009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.166289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.166323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.166607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.166642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.166923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.166955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.167096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.167130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.167321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.167354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.167497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.167532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.167743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.167776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.168032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.168065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.168254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.168286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.168471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.168505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.168703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.168737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.168937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.168969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.169097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.169129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.169325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.169358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.169588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.169623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.169924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.169956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.170151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.170185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.170435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.795 [2024-12-13 06:42:22.170480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.795 qpair failed and we were unable to recover it.
00:36:30.795 [2024-12-13 06:42:22.170610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.170642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.170849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.170884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.171178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.171210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.171484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.171523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.171786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.171820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.172004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.796 [2024-12-13 06:42:22.172037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.796 qpair failed and we were unable to recover it.
00:36:30.796 [2024-12-13 06:42:22.172320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.172353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.172659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.172696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.172969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.173002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.173291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.173323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.173444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.173490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 
00:36:30.796 [2024-12-13 06:42:22.173767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.173800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.174016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.174049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.174250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.174282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.174529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.174563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.174841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.174873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 
00:36:30.796 [2024-12-13 06:42:22.175154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.175186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.175398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.175430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.175667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.175701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.175975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.176008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.176284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.176317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 
00:36:30.796 [2024-12-13 06:42:22.176519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.176553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.176749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.176783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.177038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.177071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.177274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.177306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.177503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.177537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 
00:36:30.796 [2024-12-13 06:42:22.177740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.177773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.177982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.178204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.178347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.178527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 
00:36:30.796 [2024-12-13 06:42:22.178684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.796 qpair failed and we were unable to recover it. 00:36:30.796 [2024-12-13 06:42:22.178923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.796 [2024-12-13 06:42:22.178956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.179142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.179174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.179395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.179428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.179643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.179678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.179824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.179856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.179982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.180014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.180284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.180319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.180472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.180523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.180720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.180753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.180933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.180967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.181229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.181263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.181555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.181594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.181870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.181903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.182039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.182073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.182199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.182231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.182431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.182474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.182723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.182755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.182956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.182988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.183176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.183208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.183339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.183372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.183591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.183625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.183835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.183867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.183981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.184013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.184206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.184238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.184365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.184397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.184595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.184628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.184829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.184861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.185070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.185102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.185287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.185319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.185502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.185535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.185732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.185764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.186041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.186072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.186272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.186304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.186585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.186619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.186909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.186941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.187117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.187149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.187354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.187387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.187660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.187694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 00:36:30.797 [2024-12-13 06:42:22.187843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.187875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.797 qpair failed and we were unable to recover it. 
00:36:30.797 [2024-12-13 06:42:22.188171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.797 [2024-12-13 06:42:22.188203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.188493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.188525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.188733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.188765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.188945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.188977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.189196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.189226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 
00:36:30.798 [2024-12-13 06:42:22.189446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.189488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.189743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.189772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.189972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.190001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.190250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.190280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 00:36:30.798 [2024-12-13 06:42:22.190480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.190510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 
00:36:30.798 [2024-12-13 06:42:22.190787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.798 [2024-12-13 06:42:22.190818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.798 qpair failed and we were unable to recover it. 
[log condensed: the identical posix_sock_create connect() failure (errno = 111, ECONNREFUSED) and nvme_tcp_qpair_connect_sock error for tqpair=0x7fb7cc000b90 (addr=10.0.0.2, port=4420) repeats continuously from 06:42:22.190787 through 06:42:22.221545; every retry ends with "qpair failed and we were unable to recover it."]
00:36:30.801 [2024-12-13 06:42:22.221732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.221765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.222045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.222077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.222282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.222314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.222545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.222579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.222782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.222814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 
00:36:30.801 [2024-12-13 06:42:22.223066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.223098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.223233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.223265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.223520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.223559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.223845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.223877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.224061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.224092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 
00:36:30.801 [2024-12-13 06:42:22.224270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.224302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.224590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.224624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.224900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.224932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.225211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.225243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.225504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.225538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 
00:36:30.801 [2024-12-13 06:42:22.225743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.225775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.226048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.226081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.226373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.226406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.226618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.226651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.226901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.226934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 
00:36:30.801 [2024-12-13 06:42:22.227122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.227154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.227357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.227390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.227676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.227709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.228014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.228046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.228307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.228338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 
00:36:30.801 [2024-12-13 06:42:22.228554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.228587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.801 [2024-12-13 06:42:22.228784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.801 [2024-12-13 06:42:22.228816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.801 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.229057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.229089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.229359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.229391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.229679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.229712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.229993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.230025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.230309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.230341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.230602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.230635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.230828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.230860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.231067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.231101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.231323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.231354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.231568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.231602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.231878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.231910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.232199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.232231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.232511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.232545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.232820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.232852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.233075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.233107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.233382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.233414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.233745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.233777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.233995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.234027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.234209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.234241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.234515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.234548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.234754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.234792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.235086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.235119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.235296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.235328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.235595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.235628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.235928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.235960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.236157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.236189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.236381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.236413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.236555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.236589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.236892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.236924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.237224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.237256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.237506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.237539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.237749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.237780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.237975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.238007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.238185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.238218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.238504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.238538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.238801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.238833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.239036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.239069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.802 [2024-12-13 06:42:22.239270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.239301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 
00:36:30.802 [2024-12-13 06:42:22.239573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.802 [2024-12-13 06:42:22.239606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.802 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.239862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.239894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.240094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.240126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.240255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.240287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.240580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.240613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 
00:36:30.803 [2024-12-13 06:42:22.240851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.240883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.241016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.241048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.241317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.241349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.241626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.241660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 00:36:30.803 [2024-12-13 06:42:22.241950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.803 [2024-12-13 06:42:22.241983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.803 qpair failed and we were unable to recover it. 
00:36:30.803 [2024-12-13 06:42:22.242267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.803 [2024-12-13 06:42:22.242300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.803 qpair failed and we were unable to recover it.
00:36:30.806 [2024-12-13 06:42:22.272802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.272835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.273106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.273138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.273421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.273461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.273684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.273717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.273987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.274019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 
00:36:30.806 [2024-12-13 06:42:22.274314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.274347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.274543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.274577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.274762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.274793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.274923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.274955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.275142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.275174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 
00:36:30.806 [2024-12-13 06:42:22.275438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.275480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.275681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.275713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.275989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.276021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.276300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.276333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.276552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.276585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 
00:36:30.806 [2024-12-13 06:42:22.276804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.276836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.277015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.277047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.277239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.277271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.277471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.277505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.277781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.277814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 
00:36:30.806 [2024-12-13 06:42:22.278121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.278153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.278342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.278373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.278655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.278689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.278968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.279000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.279223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.279255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 
00:36:30.806 [2024-12-13 06:42:22.279525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.279558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.279766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.806 [2024-12-13 06:42:22.279798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.806 qpair failed and we were unable to recover it. 00:36:30.806 [2024-12-13 06:42:22.279988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.280021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.280231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.280263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.280482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.280514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.280768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.280800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.281072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.281104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.281293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.281326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.281599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.281632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.281921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.281953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.282252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.282284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.282552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.282586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.282765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.282798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.283000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.283031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.283331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.283363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.283636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.283670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.283920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.283951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.284248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.284280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.284551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.284602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.284880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.284918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.285206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.285238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.285513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.285546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.285838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.285870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.286145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.286177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.286471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.286504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.286713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.286745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.286937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.286969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.287204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.287236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.287483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.287516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.287785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.287817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.288007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.288038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.288314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.288346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.288560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.288594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.288819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.288852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.289046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.289079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.289345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.289377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.289647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.289680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.289974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.290006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.290312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.290344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.290541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.290574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 
00:36:30.807 [2024-12-13 06:42:22.290846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.807 [2024-12-13 06:42:22.290877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.807 qpair failed and we were unable to recover it. 00:36:30.807 [2024-12-13 06:42:22.291141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.291173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.291371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.291403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.291797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.291831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.292031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.292062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 
00:36:30.808 [2024-12-13 06:42:22.292265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.292297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.292578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.292612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.292843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.292874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.293154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.293186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.293313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.293345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 
00:36:30.808 [2024-12-13 06:42:22.293478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.293511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.293783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.293815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.294019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.294051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.294348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.294380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 00:36:30.808 [2024-12-13 06:42:22.294648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.808 [2024-12-13 06:42:22.294681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.808 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.325228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.325260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.325483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.325517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.325700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.325733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.326001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.326033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.326228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.326260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.326440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.326481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.326756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.326789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.326924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.326955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.327203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.327235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.327418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.327463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.327591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.327623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.327900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.327932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.328207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.328239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.328475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.328507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.328723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.328760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.328966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.328997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.329139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.329170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.329391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.329423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.329729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.329762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.329986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.330017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.330213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.330245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.330484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.330517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.330778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.330810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.331109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.331142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.331269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.331301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 
00:36:30.811 [2024-12-13 06:42:22.331414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.331446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.331664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.811 [2024-12-13 06:42:22.331696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.811 qpair failed and we were unable to recover it. 00:36:30.811 [2024-12-13 06:42:22.331900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.331932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.332206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.332238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.332421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.332460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.332732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.332764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.332897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.332929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.333201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.333232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.333506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.333540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.333836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.333867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.334088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.334120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.334322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.334354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.334566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.334599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.334870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.334902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.335193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.335226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.335469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.335502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.335788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.335821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.336040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.336073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.336359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.336391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.336629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.336661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.336886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.336918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.337192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.337224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.337369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.337401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.337698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.337730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.337964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.337997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.338196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.338227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.338509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.338543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.338696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.338729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.338950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.338982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.339286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.339327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.339526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.339559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.339780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.339812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.340017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.340048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.340319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.340351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.340604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.340637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.340903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.340937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.341130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.341162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.341305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.341341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.341618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.341652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.341794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.341826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 
00:36:30.812 [2024-12-13 06:42:22.342029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.812 [2024-12-13 06:42:22.342061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.812 qpair failed and we were unable to recover it. 00:36:30.812 [2024-12-13 06:42:22.342276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.342308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.342493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.342527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.342811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.342846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.343116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.343148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 
00:36:30.813 [2024-12-13 06:42:22.343369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.343402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.343592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.343627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.343904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.343936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.344195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.344228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 00:36:30.813 [2024-12-13 06:42:22.344427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.813 [2024-12-13 06:42:22.344477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:30.813 qpair failed and we were unable to recover it. 
00:36:30.813 [2024-12-13 06:42:22.344700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.813 [2024-12-13 06:42:22.344733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:30.813 qpair failed and we were unable to recover it.
00:36:30.813 [connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." sequence repeated for tqpair=0x7fb7cc000b90, addr=10.0.0.2, port=4420, at timestamps 06:42:22.344935 through 06:42:22.355927]
00:36:30.814 [2024-12-13 06:42:22.356203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.814 [2024-12-13 06:42:22.356279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.814 qpair failed and we were unable to recover it.
00:36:30.814 [same failure sequence repeated for tqpair=0x7fb7c8000b90, addr=10.0.0.2, port=4420, at timestamps 06:42:22.356535 through 06:42:22.366924]
00:36:30.815 [2024-12-13 06:42:22.367119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.815 [2024-12-13 06:42:22.367208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:30.815 qpair failed and we were unable to recover it.
00:36:30.816 [same failure sequence repeated for tqpair=0x7fb7d4000b90, addr=10.0.0.2, port=4420, at timestamps 06:42:22.367522 through 06:42:22.374549]
00:36:30.816 [2024-12-13 06:42:22.374807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.374839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.374972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.375004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.375258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.375292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.375469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.375502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.375659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.375691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 
00:36:30.816 [2024-12-13 06:42:22.375878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.375912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.376220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.376253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.376574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.376610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.376750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.376784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.377039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.377071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 
00:36:30.816 [2024-12-13 06:42:22.377322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.377355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.377554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.377588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.377875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.377907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.378130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.378163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.378343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.378375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 
00:36:30.816 [2024-12-13 06:42:22.378639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.378673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.378882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.378914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.379197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.379230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.379428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.379468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.379612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.379646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 
00:36:30.816 [2024-12-13 06:42:22.379857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.379891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.380091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.380123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.380257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.380290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.380500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.380536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.380756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.380795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 
00:36:30.816 [2024-12-13 06:42:22.380982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.381015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.381271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.381304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.381442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.816 [2024-12-13 06:42:22.381485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.816 qpair failed and we were unable to recover it. 00:36:30.816 [2024-12-13 06:42:22.381627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.381660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.381810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.381845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.381987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.382020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.382235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.382268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.382482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.382518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.382667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.382700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.382925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.382959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.383086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.383119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.383265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.383298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.383490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.383525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.383672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.383705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.383983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.384016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.384150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.384185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.384398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.384433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.384638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.384671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.384858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.384892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.385100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.385132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.385316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.385348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.385631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.385667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.385863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.385896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.386009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.386043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.386176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.386210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.386426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.386469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.386593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.386627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.386909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.386942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.387095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.387128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.387246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.387281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.387410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.387443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.387601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.387635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.387873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.387907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.388097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.388130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.388350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.388382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.388530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.388576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.388834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.388867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.389100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.389132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.389246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.389279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.389429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.389481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 
00:36:30.817 [2024-12-13 06:42:22.389670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.389702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.389911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.817 [2024-12-13 06:42:22.389944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.817 qpair failed and we were unable to recover it. 00:36:30.817 [2024-12-13 06:42:22.390070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.390103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.390310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.390345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.390485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.390519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 
00:36:30.818 [2024-12-13 06:42:22.390714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.390746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.390929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.390961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.391155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.391187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.391305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.391339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.391468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.391501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 
00:36:30.818 [2024-12-13 06:42:22.391757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.391790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.392049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.392081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.392283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.392317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.392459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.392492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 00:36:30.818 [2024-12-13 06:42:22.392761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.818 [2024-12-13 06:42:22.392795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:30.818 qpair failed and we were unable to recover it. 
00:36:30.819 [2024-12-13 06:42:22.403245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.819 [2024-12-13 06:42:22.403326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:30.819 qpair failed and we were unable to recover it.
00:36:30.819 [2024-12-13 06:42:22.403964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.403996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 00:36:30.819 [2024-12-13 06:42:22.404246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.404278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 00:36:30.819 [2024-12-13 06:42:22.404399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.404431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 00:36:30.819 [2024-12-13 06:42:22.404575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.404609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 00:36:30.819 [2024-12-13 06:42:22.404786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.404818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 
00:36:30.819 [2024-12-13 06:42:22.405366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.819 [2024-12-13 06:42:22.405404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.819 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.405526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.405559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.405810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.405846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.406055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.406088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.406291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.406324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.406518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.406554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.406740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.406772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.406974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.407137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.407289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.407505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.407740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.407893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.407924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.408123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.408157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.408301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.408333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.408468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.408501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.408694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.408727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.408851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.408884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.409082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.409121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.409408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.409440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.409641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.409673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.409868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.409899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.410153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.410185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.410445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.410486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.410694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.410727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.410916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.410949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.411142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.411173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.411470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.411504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.411706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.411738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.411989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.412021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.820 [2024-12-13 06:42:22.412297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.412330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.412541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.412575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.412832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.412865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.413058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.413091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 00:36:30.820 [2024-12-13 06:42:22.413281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.820 [2024-12-13 06:42:22.413313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:30.820 qpair failed and we were unable to recover it. 
00:36:30.822 [2024-12-13 06:42:22.425808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.822 [2024-12-13 06:42:22.425840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:30.822 qpair failed and we were unable to recover it.
00:36:30.822 [2024-12-13 06:42:22.426081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.822 [2024-12-13 06:42:22.426157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.822 qpair failed and we were unable to recover it.
00:36:30.822 [2024-12-13 06:42:22.426404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.822 [2024-12-13 06:42:22.426441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.822 qpair failed and we were unable to recover it.
00:36:30.822 [2024-12-13 06:42:22.426742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.822 [2024-12-13 06:42:22.426775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.822 qpair failed and we were unable to recover it.
00:36:30.822 [2024-12-13 06:42:22.427052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:30.822 [2024-12-13 06:42:22.427085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:30.822 qpair failed and we were unable to recover it.
00:36:30.822 [2024-12-13 06:42:22.427328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.822 [2024-12-13 06:42:22.427360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.822 qpair failed and we were unable to recover it. 00:36:30.822 [2024-12-13 06:42:22.427509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.822 [2024-12-13 06:42:22.427543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.822 qpair failed and we were unable to recover it. 00:36:30.822 [2024-12-13 06:42:22.427690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.822 [2024-12-13 06:42:22.427723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.822 qpair failed and we were unable to recover it. 00:36:30.822 [2024-12-13 06:42:22.427966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.822 [2024-12-13 06:42:22.427998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.822 qpair failed and we were unable to recover it. 00:36:30.822 [2024-12-13 06:42:22.428291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:30.822 [2024-12-13 06:42:22.428323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:30.822 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.428598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.428634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.428769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.428802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.429077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.429111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.429312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.429344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.429496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.429538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.429794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.429827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.429968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.430000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.430122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.430153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.430278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.430309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.430543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.430574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.430790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.430823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.431103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.431136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.431374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.431406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.431699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.431733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.431869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.431900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.432178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.432210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.432491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.432525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.432729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.432762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.433021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.433053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.433191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.433222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.433420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.433467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.433681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.433714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.433908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.433940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.434146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.434178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.434491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.434524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.434817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.434849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.435152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.435184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.435434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.435476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.435775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.435808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.436011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.436043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.436320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.436352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.436513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.436546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.436746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.436778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.437003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.437035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 00:36:31.100 [2024-12-13 06:42:22.437248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.100 [2024-12-13 06:42:22.437281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.100 qpair failed and we were unable to recover it. 
00:36:31.100 [2024-12-13 06:42:22.437537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.437571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.437841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.437873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.438128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.438170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.438477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.438511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.438751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.438784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.438998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.439031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.439309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.439342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.439538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.439572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.439829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.439863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.440080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.440118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.440322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.440354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.440505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.440539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.440844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.440876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.441172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.441204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.441402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.441434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.441592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.441625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.441755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.441787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.442054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.442086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.442279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.442311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.442521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.442554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.442769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.442801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.443004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.443036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.443179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.443210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.443428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.443472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.443687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.443719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.443878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.443911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.444144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.444176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.444300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.444332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.444471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.444504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.444683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.444715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.444936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.444967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.445263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.445295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.445516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.445550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.445689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.445720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.445931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.445965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 
00:36:31.101 [2024-12-13 06:42:22.446282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.446315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.446523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.446556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.446764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.446796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.101 [2024-12-13 06:42:22.446942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.101 [2024-12-13 06:42:22.446975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.101 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.447211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.447243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.447378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.447409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.447546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.447581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.447726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.447757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.447896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.447928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.448150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.448183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.448415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.448457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.448712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.448745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.449002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.449034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.449258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.449290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.449565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.449605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.449828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.449860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.450068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.450100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.450301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.450333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.450607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.450640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.450822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.450853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.451090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.451123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.451371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.451403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.451721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.451753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.452005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.452038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.452246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.452279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.452479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.452512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.452706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.452738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.452918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.452951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.453271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.453303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.453582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.453616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.453822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.453854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.454035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.454067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.454280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.454312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.454511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.454545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.454747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.454780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.455096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.455127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.455336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.455368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.455499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.455533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.455733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.455765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.455981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.456013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 
00:36:31.102 [2024-12-13 06:42:22.456209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.456241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.456434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.102 [2024-12-13 06:42:22.456493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.102 qpair failed and we were unable to recover it. 00:36:31.102 [2024-12-13 06:42:22.456756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.456789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.456993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.457026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.457278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.457311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 
00:36:31.103 [2024-12-13 06:42:22.457596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.457630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.457847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.457878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.458063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.458094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.458363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.458395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.458628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.458660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 
00:36:31.103 [2024-12-13 06:42:22.458916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.458948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.459096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.459128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.459308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.459339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.459606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.459640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 00:36:31.103 [2024-12-13 06:42:22.459830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.103 [2024-12-13 06:42:22.459862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.103 qpair failed and we were unable to recover it. 
00:36:31.103 [2024-12-13 06:42:22.460078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.103 [2024-12-13 06:42:22.460110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.103 qpair failed and we were unable to recover it.
00:36:31.106 [2024-12-13 06:42:22.490903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.490935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.491182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.491214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.491345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.491377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.491665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.491697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.491891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.491924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 
00:36:31.106 [2024-12-13 06:42:22.492164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.492197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.492468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.492516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.492658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.492690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.492965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.492997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.493287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.493320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 
00:36:31.106 [2024-12-13 06:42:22.493515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.493547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.493713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.493745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.493937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.493969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.494297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.494329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.494526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.494559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 
00:36:31.106 [2024-12-13 06:42:22.494775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.494807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.495000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.495032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.495173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.495205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.495484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.495518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.106 [2024-12-13 06:42:22.495724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.495756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 
00:36:31.106 [2024-12-13 06:42:22.495950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.106 [2024-12-13 06:42:22.495981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.106 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.496194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.496227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.496341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.496374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.496563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.496597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.496791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.496823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.497015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.497047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.497312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.497345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.497538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.497571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.497777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.497809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.498058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.498090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.498299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.498331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.498535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.498575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.498770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.498803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.498996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.499027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.499265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.499297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.499492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.499525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.499803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.499837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.499987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.500019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.500221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.500253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.500472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.500504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.500762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.500794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.500997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.501029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.501302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.501334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.501564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.501597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.501876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.501908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.502122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.502155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.502381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.502413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.502676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.502711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.502989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.503021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.503291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.503324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.503576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.503611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.503743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.503775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.503977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.504009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.504140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.504172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.504470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.504505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.107 [2024-12-13 06:42:22.504760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.504793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.504978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.505009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.505153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.505185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.505469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.505502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 00:36:31.107 [2024-12-13 06:42:22.505704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.107 [2024-12-13 06:42:22.505736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.107 qpair failed and we were unable to recover it. 
00:36:31.108 [2024-12-13 06:42:22.505858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.505890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.506089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.506122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.506378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.506410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.506619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.506652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.506916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.506948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 
00:36:31.108 [2024-12-13 06:42:22.507281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.507313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.507582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.507615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.507884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.507916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.508115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.508147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.508339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.508371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 
00:36:31.108 [2024-12-13 06:42:22.508651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.508684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.508835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.508872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.509005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.509038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.509299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.509332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 00:36:31.108 [2024-12-13 06:42:22.509610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.108 [2024-12-13 06:42:22.509643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.108 qpair failed and we were unable to recover it. 
00:36:31.108 [2024-12-13 06:42:22.509847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.108 [2024-12-13 06:42:22.509879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.108 qpair failed and we were unable to recover it.
[... the same connect() failure (errno = 111) and qpair recovery failure for tqpair=0x7fb7c8000b90, addr=10.0.0.2, port=4420 repeat continuously through 2024-12-13 06:42:22.537 ...]
00:36:31.111 [2024-12-13 06:42:22.537283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.111 [2024-12-13 06:42:22.537315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.111 qpair failed and we were unable to recover it.
00:36:31.111 [2024-12-13 06:42:22.537517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.537550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.537686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.537724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.537977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.538009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.538268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.538300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.538502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.538536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 
00:36:31.111 [2024-12-13 06:42:22.538740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.538772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.538980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.539012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.539134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.539166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.539373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.539405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.539638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.539671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 
00:36:31.111 [2024-12-13 06:42:22.539852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.539884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.540038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.540070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.540269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.540301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.540501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.540534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.540786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.540818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 
00:36:31.111 [2024-12-13 06:42:22.541071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.541104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.541379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.541411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.541603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.541636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.541914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.541945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 00:36:31.111 [2024-12-13 06:42:22.542248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.111 [2024-12-13 06:42:22.542279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.111 qpair failed and we were unable to recover it. 
00:36:31.111 [2024-12-13 06:42:22.542493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.542526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.542731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.542763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.542982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.543014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.543207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.543238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.543462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.543495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.543680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.543712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.543907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.543940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.544242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.544273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.544538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.544572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.544835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.544867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.545078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.545109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.545327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.545359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.545622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.545655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.545869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.545901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.546182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.546214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.546483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.546517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.546732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.546764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.546905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.546937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.547245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.547277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.547506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.547539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.547735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.547767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.547968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.548006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.548284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.548316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.548535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.548568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.548762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.548794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.548937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.548969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.549176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.549208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.549397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.549429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.549640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.549672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.549885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.549918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.550152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.550183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.550492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.550525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.550809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.550842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.551125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.551157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.551298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.551330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 
00:36:31.112 [2024-12-13 06:42:22.551532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.551565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.551838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.551871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.552147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.552180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.552382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.552413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.112 qpair failed and we were unable to recover it. 00:36:31.112 [2024-12-13 06:42:22.552574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.112 [2024-12-13 06:42:22.552607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 
00:36:31.113 [2024-12-13 06:42:22.552813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.552846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.553036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.553068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.553293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.553326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.553525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.553558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.553864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.553896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 
00:36:31.113 [2024-12-13 06:42:22.554040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.554072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.554268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.554301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.554464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.554497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.554644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.554676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.554860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.554892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 
00:36:31.113 [2024-12-13 06:42:22.555186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.555219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.555492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.555527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.555719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.555750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.556004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.556036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 00:36:31.113 [2024-12-13 06:42:22.556244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.113 [2024-12-13 06:42:22.556276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.113 qpair failed and we were unable to recover it. 
00:36:31.113 [2024-12-13 06:42:22.556496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.113 [2024-12-13 06:42:22.556529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.113 qpair failed and we were unable to recover it.
[The three lines above repeat back-to-back with successive timestamps from 06:42:22.556830 through 06:42:22.575643, all for tqpair=0x7fb7c8000b90, addr=10.0.0.2, port=4420; repeats elided.]
00:36:31.115 [2024-12-13 06:42:22.575973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.115 [2024-12-13 06:42:22.576050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.115 qpair failed and we were unable to recover it.
[The same three-line failure then repeats with successive timestamps from 06:42:22.576279 through 06:42:22.586085 for the new tqpair=0x7fb7cc000b90, same addr and port; repeats elided.]
00:36:31.116 [2024-12-13 06:42:22.586295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.586327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.586575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.586609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.586803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.586835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.587137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.587169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.587356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.587388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 
00:36:31.116 [2024-12-13 06:42:22.587625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.587659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.587938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.587970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.588276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.588308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.588551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.588585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.588731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.588763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 
00:36:31.116 [2024-12-13 06:42:22.588911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.588943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.589070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.589102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.589363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.589396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.589674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.589707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.589922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.589955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 
00:36:31.116 [2024-12-13 06:42:22.590246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.590279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.590581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.590614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.590762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.590794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.590952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.590984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 00:36:31.116 [2024-12-13 06:42:22.591204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.591236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.116 qpair failed and we were unable to recover it. 
00:36:31.116 [2024-12-13 06:42:22.591424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.116 [2024-12-13 06:42:22.591463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.591599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.591631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.591832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.591865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.592065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.592101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.592304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.592336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.592538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.592574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.592712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.592747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.592893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.592928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.593190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.593227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.593516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.593551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.593842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.593874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.594028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.594061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.594195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.594227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.594477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.594512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.594712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.594746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.594904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.594938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.595151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.595183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.595391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.595424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.595637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.595677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.595873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.595905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.596132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.596164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.596366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.596399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.596661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.596697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.596959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.596991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.597289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.597321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.597596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.597633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.597889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.597923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.598200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.598236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.598390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.598425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.598635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.598668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.117 [2024-12-13 06:42:22.598866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.598899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.599063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.599095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.600584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.600643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.600875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.600908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 00:36:31.117 [2024-12-13 06:42:22.601140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.117 [2024-12-13 06:42:22.601173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.117 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.601410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.601444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.601701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.601735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.601925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.601959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.602100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.602131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.602270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.602302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.602525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.602560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.602764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.602796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.602979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.603015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.603214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.603246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.603439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.603483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.603704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.603738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.603859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.603892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.604154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.604186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.604390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.604422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.604631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.604665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.604890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.604925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.605074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.605106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.605365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.605397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.605544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.605583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.605848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.605881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.606099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.606132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.606321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.606353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.606487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.606521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.606708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.606749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.606955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.606989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.607188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.607222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.607426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.607469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.607665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.607697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.607958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.607990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.608191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.608224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.608440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.608483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.608614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.608647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.608783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.608816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.609020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.609054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.609271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.609303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 
00:36:31.118 [2024-12-13 06:42:22.609497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.609531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.609740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.609773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.609889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.118 [2024-12-13 06:42:22.609921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.118 qpair failed and we were unable to recover it. 00:36:31.118 [2024-12-13 06:42:22.610147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.610179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.610372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.610403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.610632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.610667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.610864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.610898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.611175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.611207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.611326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.611359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.611545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.611580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.611766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.611800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.611955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.611988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.612205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.612237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.612362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.612395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.612547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.612581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.612805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.612839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.613093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.613127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.613270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.613301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.615443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.615524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.615754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.615790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.615997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.616031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.616211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.616244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.616433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.616483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.616617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.616651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.616827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.616861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.617054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.617086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.617266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.617300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.617461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.617494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.617699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.617743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.617866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.617899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.618153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.618186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.618444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.618503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.618651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.618685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.618894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.618928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.619066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.619099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.619234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.619268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.619478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.619513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.619660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.619693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.619902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.619936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.620204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.620238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 
00:36:31.119 [2024-12-13 06:42:22.620381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.119 [2024-12-13 06:42:22.620414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.119 qpair failed and we were unable to recover it. 00:36:31.119 [2024-12-13 06:42:22.620550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.620584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.620708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.620740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.620854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.620886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.621014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.621048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.621301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.621334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.621524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.621558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.621688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.621720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.621974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.622133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.622303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.622469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.622691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.622936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.622968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.623106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.623140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.623334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.623371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.623502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.623538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.623736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.623769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.623974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.624149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.624365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.624522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.624708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.624863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.624896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.625112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.625144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.625363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.625396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.625541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.625575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.625687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.625719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.625829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.625869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.626088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.626122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.626253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.626285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.626420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.626474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.626667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.626700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.626889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.626922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.627062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.627095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.627301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.627334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.627516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.627552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.627681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.627715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.627835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.627869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 00:36:31.120 [2024-12-13 06:42:22.627981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.120 [2024-12-13 06:42:22.628014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.120 qpair failed and we were unable to recover it. 
00:36:31.120 [2024-12-13 06:42:22.628220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.628253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.628383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.628417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.628622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.628656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.628788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.628821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.629003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 
00:36:31.121 [2024-12-13 06:42:22.629170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.629311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.629601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.629758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 00:36:31.121 [2024-12-13 06:42:22.629921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.121 [2024-12-13 06:42:22.629955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.121 qpair failed and we were unable to recover it. 
00:36:31.121 [2024-12-13 06:42:22.630137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.630170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.630314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.630348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.630497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.630531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.630676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.630711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.630905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.630939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.631069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.631103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.631296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.631329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.631517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.631550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.631771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.631806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.635628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.635686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.635914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.635949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.636149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.636183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.636364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.636398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.636644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.636681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.636802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.636835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.636962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.636995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.637195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.637229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.637419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.637467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.637662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.637703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.637907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.637939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.638117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.638150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.638292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.638324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.638531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.638566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.638769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.638802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.639006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.639044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.639241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.639274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.639407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.639440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.639706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.639739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.121 [2024-12-13 06:42:22.639940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.121 [2024-12-13 06:42:22.639974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.121 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.640190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.640223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.640348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.640380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.640615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.640649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.640778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.640809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.640991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.641024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.641162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.641194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.641407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.641447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.641681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.641715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.641972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.642188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.642347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.642516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.642789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.642934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.642966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.643160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.643195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.643410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.643445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.643655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.643691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.643828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.643859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.644061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.644094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.644295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.644328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.644508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.644542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.644670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.644708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.644838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.644869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.645886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.645918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.646189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.646227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.646375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.646407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.646616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.646650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.646880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.646914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.647039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.122 [2024-12-13 06:42:22.647073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.122 qpair failed and we were unable to recover it.
00:36:31.122 [2024-12-13 06:42:22.647204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.647243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.647355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.647387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.647528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.647562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.647669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.647714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.647905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.647937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.648888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.648919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.649193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.649226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.649344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.649377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.649508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.649542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.649749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.649782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.650028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.650061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.650244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.650275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.650466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.650507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.650706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.650738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.650920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.650952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.651122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.651154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.651270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.651302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.651532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.651566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.651765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.651797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.651989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.652021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.652227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.652259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.652460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.652493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.652692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.652725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.652847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.652878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.653064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.653097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.653295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.653327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.653575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.123 [2024-12-13 06:42:22.653608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.123 qpair failed and we were unable to recover it.
00:36:31.123 [2024-12-13 06:42:22.653819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.653851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.123 [2024-12-13 06:42:22.654029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.654062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.123 [2024-12-13 06:42:22.654241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.654272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.123 [2024-12-13 06:42:22.654383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.654420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.123 [2024-12-13 06:42:22.654628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.654661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 
00:36:31.123 [2024-12-13 06:42:22.654847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.654879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.123 [2024-12-13 06:42:22.655078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.123 [2024-12-13 06:42:22.655110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.123 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.655247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.655279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.655546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.655580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.655773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.655805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.655996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.656028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.656157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.656188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.656313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.656344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.656621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.656653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.656924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.656956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.657256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.657287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.657468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.657500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.657752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.657784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.657979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.658011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.658258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.658290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.658437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.658476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.658724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.658756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.659067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.659098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.659383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.659414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.659624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.659657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.659850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.659881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.660005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.660036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.660164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.660196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.660402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.660433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.660756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.660789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.661021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.661054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.661354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.661385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.661649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.661682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.661875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.661907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.662131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.662163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.662461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.662493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.662741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.662774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.663034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.663066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.663246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.663278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.663524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.663557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.663701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.663732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.663935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.663967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.664170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.664202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.664505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.664545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.124 [2024-12-13 06:42:22.664745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.664777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 
00:36:31.124 [2024-12-13 06:42:22.664940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.124 [2024-12-13 06:42:22.664971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.124 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.665189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.665221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.665497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.665530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.665803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.665835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.666118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.666150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.666468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.666502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.666795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.666827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.667009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.667040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.667238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.667270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.667564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.667596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.667832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.667863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.668064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.668095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.668376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.668408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.668623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.668656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.668778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.668809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.668951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.668982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.669112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.669144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.669414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.669445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.669676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.669708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.669909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.669941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.670132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.670163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.670462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.670496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.670633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.670665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.670862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.670893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.671100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.671132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.671374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.671406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.671593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.671626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.671897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.671929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.672035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.672066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.672251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.672283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.672549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.672583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.672778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.672810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.673013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.673045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.673391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.673423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.673639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.673672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 
00:36:31.125 [2024-12-13 06:42:22.673820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.673851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.674126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.674158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.674335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.674367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.674495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.125 [2024-12-13 06:42:22.674533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.125 qpair failed and we were unable to recover it. 00:36:31.125 [2024-12-13 06:42:22.674811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.674843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 
00:36:31.126 [2024-12-13 06:42:22.675082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.675115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 00:36:31.126 [2024-12-13 06:42:22.675324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.675357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 00:36:31.126 [2024-12-13 06:42:22.675620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.675652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 00:36:31.126 [2024-12-13 06:42:22.675900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.675933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 00:36:31.126 [2024-12-13 06:42:22.676208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.126 [2024-12-13 06:42:22.676239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.126 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.704378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.704409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.704626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.704659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.704872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.704905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.705118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.705156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.705431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.705473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.705697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.705730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.706006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.706038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.706286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.706318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.706507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.706541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.706744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.706775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.707026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.707058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.707248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.707281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.707466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.707498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.707644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.707676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.707874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.707907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.708169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.708200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.708437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.708479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.708635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.708669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.708922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.708954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.709207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.709239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.709386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.709419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.709630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.709664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.709869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.709900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.710050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.710083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.710296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.710328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.710619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.710653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.710842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.710874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.711075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.711106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.711358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.711390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.711609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.711643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.711804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.711837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.712043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.712075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.712353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.712386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.712582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.712615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 00:36:31.129 [2024-12-13 06:42:22.712873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.129 [2024-12-13 06:42:22.712905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.129 qpair failed and we were unable to recover it. 
00:36:31.129 [2024-12-13 06:42:22.713286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.713318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.713593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.713625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.713846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.713878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.714014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.714047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.714253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.714284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.714544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.714577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.714707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.714740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.715018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.715050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.715240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.715271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.715538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.715572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.715723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.715755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.715882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.715914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.716029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.716063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.716247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.716277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.716481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.716514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.716670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.716702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.716928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.716960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.717104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.717136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.717319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.717351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.717560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.717593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.717876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.717908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.718056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.718088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.718299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.718331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.718515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.718548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.718695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.718726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.718953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.718986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.719140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.719171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.719367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.719399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.719644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.719676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.719877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.719909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.720228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.720260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.720462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.720494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.720696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.720729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.720932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.720964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.721280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.721312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 
00:36:31.130 [2024-12-13 06:42:22.721439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.721485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.721634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.721666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.721921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.721953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.130 [2024-12-13 06:42:22.722224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.130 [2024-12-13 06:42:22.722255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.130 qpair failed and we were unable to recover it. 00:36:31.131 [2024-12-13 06:42:22.722505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.131 [2024-12-13 06:42:22.722539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.131 qpair failed and we were unable to recover it. 
00:36:31.131 [2024-12-13 06:42:22.722786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.722818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.723018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.723049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.723263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.723295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.723519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.723552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.723807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.723839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.723974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.724005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.724285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.724317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.724517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.724550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.724804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.724835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.724992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.725025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.725342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.725375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.725651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.725684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.725901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.725932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.726155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.726187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.726403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.726435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.726648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.726680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.726899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.726930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.727195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.727227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.727407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.727438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.727725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.727758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.727961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.727993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.728277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.728309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.728614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.728648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.728792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.728824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.729030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.729061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.729175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.729207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.729445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.729485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.729751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.729784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.729988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.730020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.730158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.730190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.730471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.730651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.730683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.730839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.131 [2024-12-13 06:42:22.730871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.131 qpair failed and we were unable to recover it.
00:36:31.131 [2024-12-13 06:42:22.731057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.731090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.731288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.731320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.731521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.731564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.731792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.731824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.732029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.732060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.732358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.732390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.732556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.732589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.732746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.732777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.732964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.732996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.733249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.733281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.733529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.733563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.733764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.733796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.733986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.734017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.734329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.734361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.734540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.734573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.734790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.734822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.734963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.734995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.735259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.735292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.735473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.735507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.132 [2024-12-13 06:42:22.735791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.132 [2024-12-13 06:42:22.735824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.132 qpair failed and we were unable to recover it.
00:36:31.410 [2024-12-13 06:42:22.736155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.410 [2024-12-13 06:42:22.736186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.736505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.736538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.736743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.736775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.736992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.737023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.737236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.737268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.737560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.737594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.737872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.737903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.738047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.738079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.738306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.738337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.738550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.738583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.738724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.738755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.738909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.738941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.739136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.739167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.739463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.739497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.739694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.739726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.739918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.739950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.740313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.740346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.740489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.740521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.740771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.740802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.741010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.741041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.741295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.741327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.741536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.741569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.741775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.741813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.741950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.741982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.742100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.742131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.742378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.742409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.742623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.742656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.742857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.742889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.743032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.743063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.743334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.743366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.743639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.743673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.743805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.743837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.744159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.744191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.744333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.744365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.744646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.744678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.744887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.744918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.745070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.745102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.745387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.411 [2024-12-13 06:42:22.745419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.411 qpair failed and we were unable to recover it.
00:36:31.411 [2024-12-13 06:42:22.745653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.745686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.745817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.745849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.745974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.746006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.746235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.746267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.746548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.746581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.746737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.746768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.747023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.747054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.747280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.747312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.747465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.747497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.747751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.747784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.747925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.747956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.748241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.748272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.748524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.748557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.748809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.748840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.749038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.749069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.749267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.749298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.749481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.749514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.749663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.749695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.749900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.749932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.750072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.750104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.750305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.750337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.750600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.750633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.750836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.750870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.751051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.412 [2024-12-13 06:42:22.751083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.412 qpair failed and we were unable to recover it.
00:36:31.412 [2024-12-13 06:42:22.751367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.751405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.751634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.751667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.751916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.751947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.752248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.752281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.752566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.752600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 
00:36:31.412 [2024-12-13 06:42:22.752804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.752837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.753042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.753073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.753268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.753300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.753505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.753538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.753665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.753696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 
00:36:31.412 [2024-12-13 06:42:22.753890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.753921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.754182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.754214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.754486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.754519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.754701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.754733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 00:36:31.412 [2024-12-13 06:42:22.755015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.412 [2024-12-13 06:42:22.755048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.412 qpair failed and we were unable to recover it. 
00:36:31.412 [2024-12-13 06:42:22.755241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.755272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.755536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.755569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.755767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.755798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.755981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.756013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.756218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.756250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.756508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.756541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.756655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.756687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.756884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.756916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.757194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.757226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.757482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.757516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.757720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.757751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.757973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.758004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.758262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.758294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.758496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.758529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.758741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.758773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.758959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.758991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.759252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.759285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.759587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.759620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.759760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.759792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.760015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.760048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.760237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.760269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.760602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.760635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.760842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.760874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.761077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.761109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.761232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.761264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.761577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.761616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.761920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.761951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.762278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.762309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.762540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.762573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.762754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.762786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.762922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.762954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.763280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.763310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.763528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.763561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.763707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.763738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.763932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.763963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 
00:36:31.413 [2024-12-13 06:42:22.764350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.764382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.764583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.764616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.764754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.764786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.413 [2024-12-13 06:42:22.764980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.413 [2024-12-13 06:42:22.765011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.413 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.765293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.765325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 
00:36:31.414 [2024-12-13 06:42:22.765512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.765544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.765672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.765704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.765915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.765947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.766146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.766177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.766301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.766333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 
00:36:31.414 [2024-12-13 06:42:22.766595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.766628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.766924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.766955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.767225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.767257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.767565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.767598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.767764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.767796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 
00:36:31.414 [2024-12-13 06:42:22.767925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.767956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.768185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.768217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.768414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.768446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.768687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.768720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.768921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.768952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 
00:36:31.414 [2024-12-13 06:42:22.769180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.769212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.769410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.769442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.769661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.769693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.769820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.769852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 00:36:31.414 [2024-12-13 06:42:22.770056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.770088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it. 
00:36:31.414 [2024-12-13 06:42:22.770293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.414 [2024-12-13 06:42:22.770325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.414 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it" sequence repeated for every retry from 06:42:22.770604 through 06:42:22.797803; repeated entries omitted]
00:36:31.417 [2024-12-13 06:42:22.798029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.798062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.798339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.798371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.798663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.798697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.798988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.799019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.799175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.799207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 
00:36:31.417 [2024-12-13 06:42:22.799386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.799418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.799663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.799695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.799846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.799877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.800133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.800166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.800363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.800395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 
00:36:31.417 [2024-12-13 06:42:22.800584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.800623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.800777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.800808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.800946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.417 [2024-12-13 06:42:22.800978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.417 qpair failed and we were unable to recover it. 00:36:31.417 [2024-12-13 06:42:22.801288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.801319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.801590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.801623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.801825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.801857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.802085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.802117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.802392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.802425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.802560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.802593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.802772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.802804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.803043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.803075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.803349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.803381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.803573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.803605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.803810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.803842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.804126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.804159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.804363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.804395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.804651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.804684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.804890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.804923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.805055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.805086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.805359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.805391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.805595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.805629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.805908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.805939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.806180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.806211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.806473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.806506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.806690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.806722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.806996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.807028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.807226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.807258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.807505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.807539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.807797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.807828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.808076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.808108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.808251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.808283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.808502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.808535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.808845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.808877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.809195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.809227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.809480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.809514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.809670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.809702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.809978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.810010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.810286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.810321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.810466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.810500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.810692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.810724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 
00:36:31.418 [2024-12-13 06:42:22.810917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.810955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.811119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.418 [2024-12-13 06:42:22.811151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.418 qpair failed and we were unable to recover it. 00:36:31.418 [2024-12-13 06:42:22.811333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.811365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.811568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.811601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.811799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.811830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 
00:36:31.419 [2024-12-13 06:42:22.811949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.811981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.812238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.812270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.812551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.812583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.812778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.812811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.812948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.812980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 
00:36:31.419 [2024-12-13 06:42:22.813226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.813258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.813480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.813513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.813724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.813756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.814010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.814042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.814316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.814349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 
00:36:31.419 [2024-12-13 06:42:22.814536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.814568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.814772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.814804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.814998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.815031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.815223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.815255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.815437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.815481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 
00:36:31.419 [2024-12-13 06:42:22.815733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.815765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.815948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.815979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.816186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.816218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.816406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.816438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 00:36:31.419 [2024-12-13 06:42:22.816654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.419 [2024-12-13 06:42:22.816686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.419 qpair failed and we were unable to recover it. 
00:36:31.419 [2024-12-13 06:42:22.817397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.419 [2024-12-13 06:42:22.817494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.419 qpair failed and we were unable to recover it.
[... identical entries for tqpair=0x7fb7d4000b90 (addr=10.0.0.2, port=4420) repeat through 06:42:22.822367; omitted ...]
00:36:31.420 [2024-12-13 06:42:22.822667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.822701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.822971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.823004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.823315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.823347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.823602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.823635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.823840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.823872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 
00:36:31.420 [2024-12-13 06:42:22.824074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.824106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.824295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.824327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.824540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.824573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.824773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.824805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.825004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.825037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 
00:36:31.420 [2024-12-13 06:42:22.825310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.825343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.825601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.825635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.825826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.825858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.826004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.826037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.826233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.826265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 
00:36:31.420 [2024-12-13 06:42:22.826534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.826567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.826842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.826874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.827169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.827201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.827486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.827522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.827801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.827833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 
00:36:31.420 [2024-12-13 06:42:22.828145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.828178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.828470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.828503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.828704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.828737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.828947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.828979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.829207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.829239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 
00:36:31.420 [2024-12-13 06:42:22.829513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.829548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.829756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.829789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.829916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.829948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.830203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.420 [2024-12-13 06:42:22.830235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.420 qpair failed and we were unable to recover it. 00:36:31.420 [2024-12-13 06:42:22.830534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.830567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.830837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.830869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.831139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.831170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.831467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.831501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.831679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.831711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.831992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.832025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.832301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.832339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.832621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.832655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.832878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.832910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.833053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.833086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.833207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.833239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.833516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.833550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.833852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.833885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.834152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.834184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.834474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.834508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.834785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.834817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.835096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.835128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.835344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.835377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.835631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.835665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.835857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.835889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.836106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.836139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.836393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.836425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.836635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.836669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.836946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.836978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.837123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.837155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.837408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.837441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.837658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.837690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.837976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.838007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.838288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.838321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.838606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.838639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.838889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.838922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.839123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.839155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.839434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.839477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.839726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.839759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.839953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.839985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.840192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.840227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 
00:36:31.421 [2024-12-13 06:42:22.840489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.840523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.840803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.840835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.421 [2024-12-13 06:42:22.841123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.421 [2024-12-13 06:42:22.841155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.421 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.841353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.841384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.841670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.841703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 
00:36:31.422 [2024-12-13 06:42:22.841929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.841961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.842261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.842291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.842490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.842522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.842658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.842689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 00:36:31.422 [2024-12-13 06:42:22.842979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.422 [2024-12-13 06:42:22.843012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.422 qpair failed and we were unable to recover it. 
00:36:31.422 [2024-12-13 06:42:22.843307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.422 [2024-12-13 06:42:22.843339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.422 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triples for tqpair=0x7fb7d4000b90 (addr=10.0.0.2, port=4420) repeated through 06:42:22.873081 — repeats trimmed]
00:36:31.425 [2024-12-13 06:42:22.873196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.873228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.873431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.873492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.873626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.873659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.873789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.873822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.873974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.874009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 
00:36:31.425 [2024-12-13 06:42:22.874265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.874298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.874464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.874498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.874622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.874655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.874929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.874962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.875223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.875258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 
00:36:31.425 [2024-12-13 06:42:22.875488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.875524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.875648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.875682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.875892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.875924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.876036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.876069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.876256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.876289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 
00:36:31.425 [2024-12-13 06:42:22.876499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.876534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.876737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.876770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.876920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.876952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.877262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.877462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.877496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 
00:36:31.425 [2024-12-13 06:42:22.877772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.877806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.878074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.878107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.878389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.878423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.878586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.878620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.878747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.878780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 
00:36:31.425 [2024-12-13 06:42:22.879056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.879088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.879311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.879344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.879636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.879669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.879872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.425 [2024-12-13 06:42:22.879904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.425 qpair failed and we were unable to recover it. 00:36:31.425 [2024-12-13 06:42:22.880170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.880204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.880466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.880500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.880771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.880804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.881013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.881052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.881163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.881196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.881376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.881409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.881621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.881654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.881857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.881890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.882166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.882199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.882320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.882352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.882474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.882507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.882713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.882747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.882953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.882986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.883181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.883213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.883357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.883389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.883716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.883751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.883953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.883986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.884246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.884278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.884543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.884578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.884764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.884796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.884923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.884956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.885139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.885173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.885423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.885470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.885727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.885766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.885985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.886020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.886234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.886267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.886464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.886498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.886699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.886732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.886930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.886963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.887230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.887263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.887504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.887539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.887734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.887767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.887970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.888003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.888126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.888160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.888345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.888377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.888650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.888685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 
00:36:31.426 [2024-12-13 06:42:22.888936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.888970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.889167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.889200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.426 [2024-12-13 06:42:22.889341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.426 [2024-12-13 06:42:22.889374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.426 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.889569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.889604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.889839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.889872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 
00:36:31.427 [2024-12-13 06:42:22.890129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.890163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.890361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.890395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.890617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.890657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.890772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.890805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.890985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.891017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 
00:36:31.427 [2024-12-13 06:42:22.891199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.891233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.891363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.891395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.891599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.891631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.891814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.891846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 00:36:31.427 [2024-12-13 06:42:22.892032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.427 [2024-12-13 06:42:22.892064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.427 qpair failed and we were unable to recover it. 
00:36:31.429 [2024-12-13 06:42:22.915095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.429 [2024-12-13 06:42:22.915126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.429 qpair failed and we were unable to recover it.
00:36:31.429 [2024-12-13 06:42:22.915325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.429 [2024-12-13 06:42:22.915356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.429 qpair failed and we were unable to recover it.
00:36:31.429 [2024-12-13 06:42:22.915576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.429 [2024-12-13 06:42:22.915609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.429 qpair failed and we were unable to recover it.
00:36:31.429 [2024-12-13 06:42:22.915806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.429 [2024-12-13 06:42:22.915838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.429 qpair failed and we were unable to recover it.
00:36:31.429 [2024-12-13 06:42:22.916130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.429 [2024-12-13 06:42:22.916206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:31.429 qpair failed and we were unable to recover it.
00:36:31.430 [2024-12-13 06:42:22.916578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.916620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.916832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.916867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.917083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.917115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.917385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.917418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.917642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.917676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.917921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.917953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.918222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.918254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.918548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.918582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.918756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.918788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.918939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.918971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.919244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.919276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.919460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.919493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.919749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.919781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.919930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.919963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.920241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.920273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.920460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.920494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.920747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.920780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.921038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.921069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.921289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.921321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.921596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.921629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.921813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.921845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.922137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.922169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.922435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.922481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.922686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.922718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.922836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.922868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.923069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.923100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.923370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.923407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.923601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.923634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.923772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.923804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.923997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.924029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.924241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.924273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.924487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.924520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.924785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.924817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.925074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.925106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.925366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.925396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 
00:36:31.430 [2024-12-13 06:42:22.925584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.925617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.925864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.925897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.926098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.926131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.430 [2024-12-13 06:42:22.926391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.430 [2024-12-13 06:42:22.926422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.430 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.926717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.926756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.926956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.926987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.927113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.927145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.927393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.927425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.927644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.927678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.927869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.927901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.928099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.928132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.928270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.928301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.928525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.928559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.928875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.928906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.929045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.929077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.929262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.929294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.929508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.929541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.929735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.929767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.929976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.930008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.930198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.930229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.930505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.930539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.930820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.930852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.930990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.931022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.931257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.931289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.931483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.931517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.931655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.931687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.931955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.931987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.932257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.932290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.932476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.932509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.932744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.932775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.933024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.933056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.933248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.933286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.933501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.933534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.933721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.933752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.933943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.933976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.934248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.934279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.934492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.934525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.934790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.934822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.935058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.935090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.935271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.935303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 
00:36:31.431 [2024-12-13 06:42:22.935505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.935539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.935670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.935702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.431 qpair failed and we were unable to recover it. 00:36:31.431 [2024-12-13 06:42:22.935986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.431 [2024-12-13 06:42:22.936018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.936262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.936294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.936507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.936541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 
00:36:31.432 [2024-12-13 06:42:22.936817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.936849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.937096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.937128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.937376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.937407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.937633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.937666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 00:36:31.432 [2024-12-13 06:42:22.937913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.937945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 
00:36:31.432 [2024-12-13 06:42:22.938174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.432 [2024-12-13 06:42:22.938207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.432 qpair failed and we were unable to recover it. 
00:36:31.435 [... identical error triplet (posix_sock_create connect() failed, errno = 111 / ECONNREFUSED; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated continuously from 06:42:22.938388 through 06:42:22.966383; duplicate log entries elided ...] 
00:36:31.435 [2024-12-13 06:42:22.966528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.966562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.966835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.966866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.967054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.967085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.967272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.967304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.967440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.967486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 
00:36:31.435 [2024-12-13 06:42:22.967675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.967707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.967975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.968006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.968254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.968286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.968471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.968504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.968623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.968655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 
00:36:31.435 [2024-12-13 06:42:22.968778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.968810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.969049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.969081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.969280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.969312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.969477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.969511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.969716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.969749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 
00:36:31.435 [2024-12-13 06:42:22.969875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.969908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.970030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.970061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.970255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.970288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.970545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.970580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.970878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.970910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 
00:36:31.435 [2024-12-13 06:42:22.971018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.971051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.971242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.971273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.971582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.971615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.971875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.971906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.972036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.972068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 
00:36:31.435 [2024-12-13 06:42:22.972206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.972237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.972417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.972462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.972668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.972700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.435 qpair failed and we were unable to recover it. 00:36:31.435 [2024-12-13 06:42:22.972830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.435 [2024-12-13 06:42:22.972862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.973074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.973107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.973298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.973329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.973580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.973614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.973763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.973795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.973995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.974026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.974204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.974236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.974369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.974401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.974691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.974725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.974916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.974948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.975139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.975170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.975444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.975491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.975747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.975779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.976067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.976100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.976384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.976417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.976683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.976716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.976966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.976999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.977187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.977219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.977393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.977426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.977549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.977582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.977795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.977827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.977963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.977995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.978241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.978273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.978552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.978585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.978840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.978871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.979009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.979041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.979243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.979276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.979394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.979425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.979685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.979718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.979984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.980016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.980282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.980314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.980612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.980645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.980863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.980894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.981138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.981170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.981366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.981398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.981653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.981687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.981876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.981908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 
00:36:31.436 [2024-12-13 06:42:22.982090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.982122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.982325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.436 [2024-12-13 06:42:22.982363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.436 qpair failed and we were unable to recover it. 00:36:31.436 [2024-12-13 06:42:22.982642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.982676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.982927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.982959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.983150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.983182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 
00:36:31.437 [2024-12-13 06:42:22.983366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.983399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.983592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.983625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.983751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.983782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.983965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.983997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 00:36:31.437 [2024-12-13 06:42:22.984116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.437 [2024-12-13 06:42:22.984149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.437 qpair failed and we were unable to recover it. 
00:36:31.437 [2024-12-13 06:42:22.984277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.984309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.984587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.984621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.984821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.984852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.985961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.985993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.986263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.986295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.986495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.986529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.986657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.986688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.986816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.986851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.987049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.987080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.987201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.987233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.987513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.987547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.987746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.987780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.987904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.987936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.988135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.988168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.988288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.988319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.988468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.988502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.988682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.988713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.988906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.988938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.989140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.989172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.437 [2024-12-13 06:42:22.989387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.437 [2024-12-13 06:42:22.989419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.437 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.989555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.989588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.989798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.989830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.990003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.990035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.990213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.990244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.990490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.990524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.990791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.990823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.991019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.991060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.991305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.991337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.991604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.991637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.991756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.991787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.991894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.991925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.992133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.992164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.992358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.992389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.992574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.992606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.992740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.992771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.992890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.992922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.993103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.993133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.993351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.993382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.993566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.993599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.993801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.993833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.994019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.994051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.994324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.994355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.994544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.994577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.994769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.994801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.994987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.995226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.995397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.995574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.995715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.995952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.995984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.996099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.996130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.996303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.996335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.996462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.996495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.996815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.996888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.997174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.997210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.997407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.997439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.997671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.997704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.997896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.997929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.438 [2024-12-13 06:42:22.998125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.438 [2024-12-13 06:42:22.998156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.438 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.998291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.998323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.998571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.998605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.998736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.998767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.998947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.998979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.999108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.999141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.999262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.999293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.999474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.999507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.999695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.999739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:22.999869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:22.999900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.000101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.000254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.000411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.000578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.000808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.000988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.001021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.001230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.001267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.001392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.001424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.001650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.001684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.001899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.001931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.002044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.002076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.002317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.002349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.002510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.002544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.002721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.002753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.003035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.003068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.003198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.003230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.003409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.003442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.003659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.003692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.003870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.003902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.004027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.004058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.004305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.004337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.004480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.004512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.004715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.004747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.004880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.004912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.005164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.005196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.005387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.005419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.005608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.005641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.005836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.005868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.005990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.006023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.439 qpair failed and we were unable to recover it.
00:36:31.439 [2024-12-13 06:42:23.006266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.439 [2024-12-13 06:42:23.006298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.006498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.006532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.006674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.006706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.006827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.006859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.006984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.007197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.007353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.007519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.007671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.007903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.007947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.008069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.008101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.008238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.008271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.008465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.440 [2024-12-13 06:42:23.008498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.440 qpair failed and we were unable to recover it.
00:36:31.440 [2024-12-13 06:42:23.008617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.008649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.008858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.008890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.009069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.009102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.009275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.009308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.009512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.009545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 
00:36:31.440 [2024-12-13 06:42:23.009749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.009781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.009965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.009998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.010208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.010242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.010357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.010390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.010523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.010557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 
00:36:31.440 [2024-12-13 06:42:23.010696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.010728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.010922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.010954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.011084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.011116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.011234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.011266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.011515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.011548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 
00:36:31.440 [2024-12-13 06:42:23.011744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.011776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.011966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.011998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.012242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.012274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.012383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.012415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.012589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.012622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 
00:36:31.440 [2024-12-13 06:42:23.012794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.012826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.013014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.013234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.013392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.013550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 
00:36:31.440 [2024-12-13 06:42:23.013684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.440 qpair failed and we were unable to recover it. 00:36:31.440 [2024-12-13 06:42:23.013830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.440 [2024-12-13 06:42:23.013862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.014039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.014071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.014251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.014283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.014406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.014438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.014627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.014660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.014775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.014807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.015005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.015037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.015240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.015272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.015373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.015405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.015601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.015634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.015818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.015856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.016052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.016084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.016254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.016285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.016486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.016519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.016691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.016723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.016829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.016861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.016968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.017197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.017348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.017556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.017696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.017913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.017945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.018133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.018164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.018339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.018370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.018562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.018596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.018703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.018735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.018913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.018945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.019149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.019180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.019357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.019389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.019518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.019551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.019814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.019845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.019956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.019988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.020110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.020142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.020252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.020284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.020405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.020436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.020557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.020589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.020781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.020812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.021057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.021089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 00:36:31.441 [2024-12-13 06:42:23.021261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.441 [2024-12-13 06:42:23.021293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.441 qpair failed and we were unable to recover it. 
00:36:31.441 [2024-12-13 06:42:23.021541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.021573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.021751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.021782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.021964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.021996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.022119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.022151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.022327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.022359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.022641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.022832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.022864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.023125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.023157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.023428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.023470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.023655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.023686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.023809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.023841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.024082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.024120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.024358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.024390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.024514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.024546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.024729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.024760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.024946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.024978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.025108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.025140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.025427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.025468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.025654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.025686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.025791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.025822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.026010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.026042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.026159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.026190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.026367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.026399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.026600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.026633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.026824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.026856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.026968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.027105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.027274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.027436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.027664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.442 [2024-12-13 06:42:23.027887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.027919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.028046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.028078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.028195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.028226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.028416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.028459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 00:36:31.442 [2024-12-13 06:42:23.028639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.442 [2024-12-13 06:42:23.028671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.442 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.028850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.028881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.029118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.029150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.029335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.029367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.029559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.029593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.029767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.029798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.029925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.029957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.030087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.030119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.030230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.030262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.030503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.030536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.030655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.030686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.030927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.030959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.031093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.031124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.031300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.031332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.031446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.031487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.031699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.031730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.031864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.031896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.032009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.032166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.032437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.032585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.032734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.032946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.032977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.033093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.033125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.033318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.033349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.033524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.033557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.033727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.033759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.033972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.034003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.034216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.034248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.034502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.034535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.034722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.034754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.034964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.034995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.035208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.035239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.035447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.035490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.035613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.035645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.035831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.035863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 
00:36:31.443 [2024-12-13 06:42:23.036048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.036080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.036271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.036302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.036446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.036485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.443 qpair failed and we were unable to recover it. 00:36:31.443 [2024-12-13 06:42:23.036684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.443 [2024-12-13 06:42:23.036716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.036858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.036889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.037059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.037107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.037283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.037315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.037444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.037486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.037668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.037700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.037873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.037905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.038170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.038201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.038374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.038406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.038585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.038619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.038793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.038824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.038962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.038993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.039097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.039129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.039250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.039281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.039460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.039491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.039739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.039771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.040010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.040041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.040232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.040264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.040472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.040509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.040797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.040829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.041018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.041247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.041475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.041678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.041812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.041963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.041995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.042115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.042147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.042328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.042360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.042530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.042563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.042747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.042779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.042893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.042925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.043111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.043143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.444 [2024-12-13 06:42:23.043279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.043312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.043428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.043468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.043638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.043670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.043909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.043941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 00:36:31.444 [2024-12-13 06:42:23.044125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.444 [2024-12-13 06:42:23.044156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.444 qpair failed and we were unable to recover it. 
00:36:31.728 [2024-12-13 06:42:23.068335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.068366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.068485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.068518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.068731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.068763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.068965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.068996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.069204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.069236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 
00:36:31.728 [2024-12-13 06:42:23.069504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.069536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.069654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.069685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.069881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.069913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.070088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.070119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.070410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.070441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 
00:36:31.728 [2024-12-13 06:42:23.070648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.070680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.070818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.070849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.071041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.071073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.071267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.071299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.071563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.071595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 
00:36:31.728 [2024-12-13 06:42:23.071779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.071811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.071992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.072023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.072229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.072261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.728 qpair failed and we were unable to recover it. 00:36:31.728 [2024-12-13 06:42:23.072527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.728 [2024-12-13 06:42:23.072565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.072683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.072715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.072840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.072871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.073076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.073108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.073288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.073320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.073505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.073538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.073745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.073777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.073966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.073997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.074170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.074201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.074341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.074372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.074551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.074583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.074768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.074800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.074984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.075015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.075255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.075287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.075489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.075521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.075638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.075670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.075782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.075814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.076108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.076139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.076405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.076437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.076654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.076686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.076804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.076835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.077023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.077055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.077294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.077325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.077585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.077618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.077751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.077782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.077972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.078003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.078188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.078220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.078439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.078498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.078759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.078791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.079040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.079071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.079243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.079274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.079482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.079515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.079762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.079793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.079914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.079946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.080209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.080241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.080436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.080475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 00:36:31.729 [2024-12-13 06:42:23.080659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.080690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.729 qpair failed and we were unable to recover it. 
00:36:31.729 [2024-12-13 06:42:23.080954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.729 [2024-12-13 06:42:23.080985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.081119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.081150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.081341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.081373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.081551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.081590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.081717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.081749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 
00:36:31.730 [2024-12-13 06:42:23.081922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.081953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.082144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.082176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.082459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.082491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.082682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.082714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.082898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.082930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 
00:36:31.730 [2024-12-13 06:42:23.083056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.083087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.083276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.083308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.083551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.083583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.083837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.083869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.084049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.084081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 
00:36:31.730 [2024-12-13 06:42:23.084200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.084231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.084483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.084515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.084711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.084743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.084931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.084963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 00:36:31.730 [2024-12-13 06:42:23.085140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.730 [2024-12-13 06:42:23.085171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.730 qpair failed and we were unable to recover it. 
00:36:31.730 [2024-12-13 06:42:23.085298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.730 [2024-12-13 06:42:23.085330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.730 qpair failed and we were unable to recover it.
00:36:31.733 [2024-12-13 06:42:23.110135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.733 [2024-12-13 06:42:23.110167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.733 qpair failed and we were unable to recover it. 00:36:31.733 [2024-12-13 06:42:23.110403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.733 [2024-12-13 06:42:23.110435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.733 qpair failed and we were unable to recover it. 00:36:31.733 [2024-12-13 06:42:23.110709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.733 [2024-12-13 06:42:23.110741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.733 qpair failed and we were unable to recover it. 00:36:31.733 [2024-12-13 06:42:23.110935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.733 [2024-12-13 06:42:23.110968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.111158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.111189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.111367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.111401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.111548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.111580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.111702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.111733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.111994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.112025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.112308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.112345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.112626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.112661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.112843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.112876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.113007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.113044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.113163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.113195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.113385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.113424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.113617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.113663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.113894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.113929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.114052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.114084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.114371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.114416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.114612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.114644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.114763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.114795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.114998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.115036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.115238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.115270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.115467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.115501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.115676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.115709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.115886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.115918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.116038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.116069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.116243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.116275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.116378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.116417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.116667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.116701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.116904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.116936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.117042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.117074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.117345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.117377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.117550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.117583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.117705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.117738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 
00:36:31.734 [2024-12-13 06:42:23.117860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.117892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.118150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.118181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.118309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.118341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.734 qpair failed and we were unable to recover it. 00:36:31.734 [2024-12-13 06:42:23.118530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.734 [2024-12-13 06:42:23.118564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.118706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.118738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.118863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.118895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.119012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.119044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.119276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.119310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.119481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.119514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.119707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.119740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.120002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.120034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.120168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.120201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.120318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.120349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.120543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.120578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.120751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.120783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.120994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.121027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.121226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.121258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.121469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.121507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.121642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.121674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.121903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.121935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.122088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.122121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.122292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.122323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.122564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.122599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.122715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.122748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.122935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.122969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.123225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.123257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.123438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.123488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.123718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.123888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.123920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.124054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.124086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 
00:36:31.735 [2024-12-13 06:42:23.124274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.124306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.124444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.124484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.124594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.735 [2024-12-13 06:42:23.124627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.735 qpair failed and we were unable to recover it. 00:36:31.735 [2024-12-13 06:42:23.124749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.124791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 00:36:31.736 [2024-12-13 06:42:23.124968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 
00:36:31.736 [2024-12-13 06:42:23.125120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 00:36:31.736 [2024-12-13 06:42:23.125334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 00:36:31.736 [2024-12-13 06:42:23.125477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 00:36:31.736 [2024-12-13 06:42:23.125776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 00:36:31.736 [2024-12-13 06:42:23.125915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.125947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 
00:36:31.736 [2024-12-13 06:42:23.126127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.736 [2024-12-13 06:42:23.126159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.736 qpair failed and we were unable to recover it. 
00:36:31.736 [... the same pair of messages — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously for each retried connection attempt from 06:42:23.126 through 06:42:23.151 ...] 
00:36:31.739 [2024-12-13 06:42:23.151840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.151871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.152044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.152076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.152258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.152289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.152486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.152518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.152703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.152737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 
00:36:31.739 [2024-12-13 06:42:23.152914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.152945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.153152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.153183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.153423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.153463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.153658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.153689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.153817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.153848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 
00:36:31.739 [2024-12-13 06:42:23.153949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.153980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.154157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.154189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.154380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.154411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.154541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.154574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.154840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.154872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 
00:36:31.739 [2024-12-13 06:42:23.155010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.155041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.155161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.155193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.155429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.155472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.155662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.155693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.155865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.155897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 
00:36:31.739 [2024-12-13 06:42:23.156012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.156044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.156238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.156269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.156469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.156501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.739 [2024-12-13 06:42:23.156696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.739 [2024-12-13 06:42:23.156727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.739 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.156849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.156880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.157138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.157175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.157344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.157376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.157489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.157521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.157641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.157672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.157852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.157883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.158048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.158079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.158254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.158286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.158421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.158462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.158632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.158663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.158783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.158814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.158987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.159019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.159200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.159231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.159404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.159435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.159577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.159609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.159746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.159778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.160020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.160051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.160298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.160329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.160441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.160482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.160657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.160688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.160819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.160850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.161029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.161060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.161166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.161198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.161439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.161481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.161620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.161651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.161861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.161893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 
00:36:31.740 [2024-12-13 06:42:23.162162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.162193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.162462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.162494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.162765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.162798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.162930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.740 [2024-12-13 06:42:23.162961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.740 qpair failed and we were unable to recover it. 00:36:31.740 [2024-12-13 06:42:23.163086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.163117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 
00:36:31.741 [2024-12-13 06:42:23.163321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.163353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.163474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.163507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.163668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.163699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.163880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.163912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.164080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.164111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 
00:36:31.741 [2024-12-13 06:42:23.164283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.164314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.164418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.164464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.164589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.164621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.164788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.164819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.165001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.165033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 
00:36:31.741 [2024-12-13 06:42:23.165268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.165305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.165507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.165540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.165637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.165669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.165801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.165832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.166005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.166036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 
00:36:31.741 [2024-12-13 06:42:23.166304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.166336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.166528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.166560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.166685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.166716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.166912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.166944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 00:36:31.741 [2024-12-13 06:42:23.167179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.741 [2024-12-13 06:42:23.167210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.741 qpair failed and we were unable to recover it. 
00:36:31.741 [2024-12-13 06:42:23.167342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.741 [2024-12-13 06:42:23.167374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.741 qpair failed and we were unable to recover it.
[the three records above repeat ~114 more times between 06:42:23.167558 and 06:42:23.191926, identical except for timestamps: same tqpair=0x7fb7cc000b90, addr=10.0.0.2, port=4420, errno = 111]
00:36:31.744 [2024-12-13 06:42:23.192110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.744 [2024-12-13 06:42:23.192141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.744 qpair failed and we were unable to recover it. 00:36:31.744 [2024-12-13 06:42:23.192326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.744 [2024-12-13 06:42:23.192357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.744 qpair failed and we were unable to recover it. 00:36:31.744 [2024-12-13 06:42:23.192471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.744 [2024-12-13 06:42:23.192502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.744 qpair failed and we were unable to recover it. 00:36:31.744 [2024-12-13 06:42:23.192708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.744 [2024-12-13 06:42:23.192740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.744 qpair failed and we were unable to recover it. 00:36:31.744 [2024-12-13 06:42:23.192923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.744 [2024-12-13 06:42:23.192954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.193255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.193286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.193419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.193458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.193647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.193678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.193872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.193904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.194091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.194122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.194294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.194325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.194431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.194470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.194600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.194631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.194818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.194849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.195114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.195145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.195379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.195410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.195586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.195619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.195830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.195862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.196138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.196169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.196378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.196409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.196672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.196704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.196957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.196988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.197175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.197207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.197389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.197420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.197622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.197654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.197940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.197971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.198161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.198192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.198391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.198422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.198559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.198592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.198778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.198809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.198994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.199026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.199289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.199320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.199455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.199487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.199609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.199641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.199891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.199922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 
00:36:31.745 [2024-12-13 06:42:23.200191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.200228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.200483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.200516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.200706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.200737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.200863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.200894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.745 qpair failed and we were unable to recover it. 00:36:31.745 [2024-12-13 06:42:23.201079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.745 [2024-12-13 06:42:23.201110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.201369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.201400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.201525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.201556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.201811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.201842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.202012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.202044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.202309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.202340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.202554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.202587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.202762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.202793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.202900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.202931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.203106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.203138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.203428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.203467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.203649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.203680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.203848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.203879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.204137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.204168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.204478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.204510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.204627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.204658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.204825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.204856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.205119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.205150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.205375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.205407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.205519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.205556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.205729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.205760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.205880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.205912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.206154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.206185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.206468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.206501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.206673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.206705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.206895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.206926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.207163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.207195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.207462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.207494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.207687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.207719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.207916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.207947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.208068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.208099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.208223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.208255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.208374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.208405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.208621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.208846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.208878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 00:36:31.746 [2024-12-13 06:42:23.209088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.746 [2024-12-13 06:42:23.209119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.746 qpair failed and we were unable to recover it. 
00:36:31.746 [2024-12-13 06:42:23.209244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.746 [2024-12-13 06:42:23.209282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.746 qpair failed and we were unable to recover it.
[... 64 further identical connect() failures (errno = 111, ECONNREFUSED) on tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420, timestamps 06:42:23.209469 through 06:42:23.223388, omitted; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:36:31.748 [2024-12-13 06:42:23.223596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.748 [2024-12-13 06:42:23.223627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.748 qpair failed and we were unable to recover it. 00:36:31.748 [2024-12-13 06:42:23.223754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.748 [2024-12-13 06:42:23.223786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:31.748 qpair failed and we were unable to recover it. 00:36:31.748 [2024-12-13 06:42:23.223839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d6c70 (9): Bad file descriptor 00:36:31.748 [2024-12-13 06:42:23.224255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.748 [2024-12-13 06:42:23.224328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.748 qpair failed and we were unable to recover it. 00:36:31.748 [2024-12-13 06:42:23.224497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.748 [2024-12-13 06:42:23.224535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.748 qpair failed and we were unable to recover it. 00:36:31.748 [2024-12-13 06:42:23.224747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.748 [2024-12-13 06:42:23.224780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.748 qpair failed and we were unable to recover it. 
00:36:31.748 [2024-12-13 06:42:23.224997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.748 [2024-12-13 06:42:23.225030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:31.749 qpair failed and we were unable to recover it.
[... 44 further identical connect() failures (errno = 111, ECONNREFUSED) on tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420, timestamps 06:42:23.225218 through 06:42:23.235231, omitted; each attempt ended with "qpair failed and we were unable to recover it." ...]
00:36:31.750 [2024-12-13 06:42:23.235475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.235507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.235625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.235656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.235917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.235949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.236073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.236105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.236407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.236439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 
00:36:31.750 [2024-12-13 06:42:23.236598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.236631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.236771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.236802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.236915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.236947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.237130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.237162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.237346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.237379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 
00:36:31.750 [2024-12-13 06:42:23.237592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.237625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.237815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.237848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.238041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.238073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.238180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.238211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.238332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.238364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 
00:36:31.750 [2024-12-13 06:42:23.238612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.238645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.238850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.238882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.239172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.239204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.239442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.239490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.239762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.239793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 
00:36:31.750 [2024-12-13 06:42:23.240048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.750 [2024-12-13 06:42:23.240080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.750 qpair failed and we were unable to recover it. 00:36:31.750 [2024-12-13 06:42:23.240277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.240308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.240519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.240553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.240806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.240838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.241045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.241076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.241335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.241367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.241480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.241514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.241754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.241785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.241909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.241940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.242122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.242154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.242362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.242393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.242710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.242742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.242878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.242911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.243051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.243082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.243208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.243240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.243433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.243474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.243579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.243610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.243816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.243848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.244045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.244077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.244322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.244353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.244540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.244573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.244818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.244850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.244983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.245015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.245222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.245253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.245518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.245551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.245696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.245729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.245972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.246004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.246118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.246149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.246333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.246364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.246608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.246641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.246881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.246912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.247116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.247148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.247353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.247385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.247508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.247549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.247678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.247710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 
00:36:31.751 [2024-12-13 06:42:23.247830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.247862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.248121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.248153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.751 [2024-12-13 06:42:23.248413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.751 [2024-12-13 06:42:23.248444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.751 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.248696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.248734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.248839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.248871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 
00:36:31.752 [2024-12-13 06:42:23.249122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.249153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.249399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.249430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.249708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.249741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.250002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.250034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.250158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.250190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 
00:36:31.752 [2024-12-13 06:42:23.250395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.250427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.250623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.250655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.250847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.250878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.251055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.251087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.251209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.251240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 
00:36:31.752 [2024-12-13 06:42:23.251364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.251396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.251662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.251695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.251874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.251906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.252026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.252058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.252179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.252210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 
00:36:31.752 [2024-12-13 06:42:23.252431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.252481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.252676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.252709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.252832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.252863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.253141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.253173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 00:36:31.752 [2024-12-13 06:42:23.253345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.752 [2024-12-13 06:42:23.253377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.752 qpair failed and we were unable to recover it. 
00:36:31.755 [2024-12-13 06:42:23.278193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.755 [2024-12-13 06:42:23.278225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.755 qpair failed and we were unable to recover it. 00:36:31.755 [2024-12-13 06:42:23.278465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.755 [2024-12-13 06:42:23.278497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.755 qpair failed and we were unable to recover it. 00:36:31.755 [2024-12-13 06:42:23.278743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.755 [2024-12-13 06:42:23.278775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.755 qpair failed and we were unable to recover it. 00:36:31.755 [2024-12-13 06:42:23.278960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.755 [2024-12-13 06:42:23.278991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.755 qpair failed and we were unable to recover it. 00:36:31.755 [2024-12-13 06:42:23.279183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.755 [2024-12-13 06:42:23.279214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.755 qpair failed and we were unable to recover it. 
00:36:31.755 [2024-12-13 06:42:23.279345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.279377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.279616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.279648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.279780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.279812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.279934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.279966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.280206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.280237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.280361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.280393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.280610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.280642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.280840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.280872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.281112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.281143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.281313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.281345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.281612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.281646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.281779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.281811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.281931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.281963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.282169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.282201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.282374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.282405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.282653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.282685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.282876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.282908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.283098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.283129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.283309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.283340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.283582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.283614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.283856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.283887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.284001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.284033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.284150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.284182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.284370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.284406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.284545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.284578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.284855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.284887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.285092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.285125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.285305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.285337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.285593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.285627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.285891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.285923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.286054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.286085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.286336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.286368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.286559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.286591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.286852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.286884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.287079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.287111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.287368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.287400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.287521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.287553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.287745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.287777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.287962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.287994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 00:36:31.756 [2024-12-13 06:42:23.288185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.756 [2024-12-13 06:42:23.288217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.756 qpair failed and we were unable to recover it. 
00:36:31.756 [2024-12-13 06:42:23.288394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.288425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.288635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.288668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.288908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.288939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.289061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.289092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.289350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.289381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.289571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.289604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.289871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.289902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.290019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.290051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.290250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.290281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.290474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.290507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.290751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.290783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.290915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.290948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.291060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.291092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.291205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.291237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.291489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.291522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.291707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.291738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.291920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.291951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.292213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.292245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.292354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.292386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.292561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.292594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.292771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.292802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.292913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.292945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.293210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.293242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.293430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.293479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.293739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.293771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.293952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.293983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.294163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.294195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.294398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.294430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.294546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.294578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.294747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.294779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.757 [2024-12-13 06:42:23.294968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.295000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.295135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.295167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.295343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.295375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.295642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.295675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 00:36:31.757 [2024-12-13 06:42:23.295959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.757 [2024-12-13 06:42:23.295991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:31.757 qpair failed and we were unable to recover it. 
00:36:31.758 [2024-12-13 06:42:23.299759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.758 [2024-12-13 06:42:23.299830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:31.758 qpair failed and we were unable to recover it.
00:36:31.758 [2024-12-13 06:42:23.300098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.758 [2024-12-13 06:42:23.300168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.758 qpair failed and we were unable to recover it.
00:36:31.758 [2024-12-13 06:42:23.300322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.758 [2024-12-13 06:42:23.300358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.758 qpair failed and we were unable to recover it.
00:36:31.758 [2024-12-13 06:42:23.300489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.758 [2024-12-13 06:42:23.300522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.758 qpair failed and we were unable to recover it.
00:36:31.758 [2024-12-13 06:42:23.300693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:31.758 [2024-12-13 06:42:23.300725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:31.758 qpair failed and we were unable to recover it.
00:36:31.760 [2024-12-13 06:42:23.320910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.760 [2024-12-13 06:42:23.320941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.760 qpair failed and we were unable to recover it. 00:36:31.760 [2024-12-13 06:42:23.321123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.760 [2024-12-13 06:42:23.321155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.760 qpair failed and we were unable to recover it. 00:36:31.760 [2024-12-13 06:42:23.321341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.760 [2024-12-13 06:42:23.321372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.760 qpair failed and we were unable to recover it. 00:36:31.760 [2024-12-13 06:42:23.321558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.321591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.321769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.321800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.321981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.322012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.322274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.322306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.322567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.322600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.322734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.322765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.323011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.323042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.323253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.323284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.323423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.323463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.323582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.323614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.323809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.323840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.324024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.324055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.324243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.324275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.324467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.324511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.324699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.324731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.324916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.324948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.325066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.325097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.325220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.325252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.325375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.325407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.325614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.325647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.325847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.325879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.326004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.326035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.326167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.326204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.326313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.326345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.326552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.326584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.326844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.326876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.326991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.327022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.327285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.327317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.327507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.327540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.327791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.327822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.327998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.328030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.328318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.328349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.328590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.328629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.328802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.328834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.329111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.329143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.329411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.329442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.329647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.329680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 
00:36:31.761 [2024-12-13 06:42:23.329923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.329955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.330065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.330096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.330208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.330240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.330476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.761 [2024-12-13 06:42:23.330509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.761 qpair failed and we were unable to recover it. 00:36:31.761 [2024-12-13 06:42:23.330621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.330651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.330872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.330903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.331020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.331051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.331229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.331260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.331403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.331434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.331691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.331722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.331836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.331868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.332072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.332104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.332348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.332379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.332645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.332679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.332863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.332894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.333013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.333045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.333225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.333256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.333468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.333500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.333711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.333743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.333963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.334139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.334171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.334286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.334317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.334557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.334589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.334768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.334799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.334980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.335011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.335215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.335251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.335423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.335463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.335713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.335746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.335877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.335909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.336082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.336113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.336282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.336313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.336500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.336532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.336794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.336826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.336932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.336964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.337105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.337136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:31.762 [2024-12-13 06:42:23.337322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.337353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.337629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.337660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.337850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.337882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.338082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.338114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 00:36:31.762 [2024-12-13 06:42:23.338251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:31.762 [2024-12-13 06:42:23.338283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:31.762 qpair failed and we were unable to recover it. 
00:36:32.046 [2024-12-13 06:42:23.362416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.362453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.362626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.362657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.362764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.362796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.362967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.362999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.363105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.363136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 
00:36:32.046 [2024-12-13 06:42:23.363305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.363336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.363509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.363542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.363653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.363686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.363873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.363905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.364046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.364078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 
00:36:32.046 [2024-12-13 06:42:23.364317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.364349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.364536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.364569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.364761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.364793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.364923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.364956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.046 qpair failed and we were unable to recover it. 00:36:32.046 [2024-12-13 06:42:23.365242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.046 [2024-12-13 06:42:23.365273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.365467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.365499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.365635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.365667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.365839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.365871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.366038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.366072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.366192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.366224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.366338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.366371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.366547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.366581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.366781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.366813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.366984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.367016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.367191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.367224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.367478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.367511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.367634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.367664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.367901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.367933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.368131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.368162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.368415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.368446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.368728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.368760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.368997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.369255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.369401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.369564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.369772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.369925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.369958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.370154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.370186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.370362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.370394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.370542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.370577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.370753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.370785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.370902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.370933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.371103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.371135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.371392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.371424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.371622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.371655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.371834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.371867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.372055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.372086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.372272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.372304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.372476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.372511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.372688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.372720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 
00:36:32.047 [2024-12-13 06:42:23.372837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.372869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.372999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.373031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.373213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.047 [2024-12-13 06:42:23.373245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.047 qpair failed and we were unable to recover it. 00:36:32.047 [2024-12-13 06:42:23.373429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.373468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.373719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.373750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 
00:36:32.048 [2024-12-13 06:42:23.374031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.374064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.374246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.374277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.374521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.374553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.374736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.374768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.374941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.374974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 
00:36:32.048 [2024-12-13 06:42:23.375146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.375179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.375360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.375392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.375512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.375545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.375784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.375816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.375947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.375978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 
00:36:32.048 [2024-12-13 06:42:23.376107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.376139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.376431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.376485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.376616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.376648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.376837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.376870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.377004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.377035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 
00:36:32.048 [2024-12-13 06:42:23.377276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.377307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.377486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.377519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.377714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.377746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.377921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.377954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 00:36:32.048 [2024-12-13 06:42:23.378137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.048 [2024-12-13 06:42:23.378168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.048 qpair failed and we were unable to recover it. 
00:36:32.048 [2024-12-13 06:42:23.378360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.378397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.378585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.378617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.378803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.378835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.379036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.379067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.379333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.379365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.379593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.379626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.379869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.379900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.380090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.380123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.380291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.380324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.380523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.380556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.380756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.380789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.381043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.381075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.381362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.381393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.381575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.381607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.381793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.381825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.381961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.381993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.382099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.048 [2024-12-13 06:42:23.382131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.048 qpair failed and we were unable to recover it.
00:36:32.048 [2024-12-13 06:42:23.382253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.382285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.382416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.382458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.382632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.382666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.382793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.382825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.383062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.383094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.383268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.383302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.383428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.383488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.383664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.383697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.383883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.383914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.384047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.384080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.384259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.384292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.384491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.384523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.384648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.384679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.384862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.384894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.385009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.385041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.385223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.385255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.385388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.385420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.385611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.385643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.385884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.385916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.386039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.386071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.386267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.386301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.386481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.386514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.386703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.386734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.386902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.386942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.387960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.387992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.388099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.388130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.388392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.388424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.388554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.388586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.388836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.388868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.389072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.389103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.389209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.389241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.389354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.389386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.389592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.389624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.049 [2024-12-13 06:42:23.389794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.049 [2024-12-13 06:42:23.389825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.049 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.389995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.390028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.390220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.390253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.390432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.390475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.390769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.390801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.391043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.391074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.391202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.391234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.391427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.391469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.391670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.391701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.391933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.391965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.392159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.392190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.392399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.392430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.392757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.392790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.392961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.392993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.393124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.393155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.393345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.393378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.393497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.393530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.393746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.393778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.393954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.393985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.394096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.394127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.394305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.394337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.394532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.394565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.394747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.394779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.394971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.395117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.395329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.395489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.395698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.395916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.395948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.396118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.396149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.396282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.396315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.396495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.396528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.050 [2024-12-13 06:42:23.396767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.050 [2024-12-13 06:42:23.396799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.050 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.396915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.396946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.397123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.397154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.397265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.397296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.397518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.397551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.397752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.397783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.397970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.398002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.398121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.398153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.398342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.398374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.398565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.398598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.398777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.398809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.399008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.399040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.399226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.399256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.399456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.399488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.399675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.399706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.399836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.399868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.400040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.400073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.400277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.400309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.400558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.400591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.400704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.400735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.400980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.401014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.401137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.401169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.401360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.401391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.401640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.401673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.401890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.401922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.402100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.402132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.402247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.402278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.402458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.051 [2024-12-13 06:42:23.402490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.051 qpair failed and we were unable to recover it.
00:36:32.051 [2024-12-13 06:42:23.402666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.402698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.402883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.402914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.403089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.403120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.403300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.403332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.403540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.403574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 
00:36:32.051 [2024-12-13 06:42:23.403750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.403793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.403922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.403956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.404076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.404108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.404294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.404325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.404441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.404482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 
00:36:32.051 [2024-12-13 06:42:23.404766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.404798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.051 [2024-12-13 06:42:23.405041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.051 [2024-12-13 06:42:23.405072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.051 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.405192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.405223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.405468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.405501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.405776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.405807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.405998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.406155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.406298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.406568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.406728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.406901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.406934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.407116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.407147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.407337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.407368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.407606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.407639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.407770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.407802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.408006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.408037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.408301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.408332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.408507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.408540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.408726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.408757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.409009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.409041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.409224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.409257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.409516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.409549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.409751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.409783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.409960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.409991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.410163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.410195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.410379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.410410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.410660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.410693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.410932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.410964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.411147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.411179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.411300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.411333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.411575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.411608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.411719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.411750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.411937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.411968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.412140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.412172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.412371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.412404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.412537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.412574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.412759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.412791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.412976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.413008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.413182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.413214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 00:36:32.052 [2024-12-13 06:42:23.413397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.052 [2024-12-13 06:42:23.413429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.052 qpair failed and we were unable to recover it. 
00:36:32.052 [2024-12-13 06:42:23.413572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.413604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.413816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.413847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.414109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.414141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.414259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.414290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.414482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.414515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 
00:36:32.053 [2024-12-13 06:42:23.414642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.414674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.414943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.414975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.415149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.415181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.415290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.415321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.415490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.415524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 
00:36:32.053 [2024-12-13 06:42:23.415773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.415804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.415981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.416144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.416368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.416524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 
00:36:32.053 [2024-12-13 06:42:23.416739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.416949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.416980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.417182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.417215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.417403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.417434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 00:36:32.053 [2024-12-13 06:42:23.417683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.053 [2024-12-13 06:42:23.417716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.053 qpair failed and we were unable to recover it. 
00:36:32.053 [2024-12-13 06:42:23.418286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.053 [2024-12-13 06:42:23.418357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.053 qpair failed and we were unable to recover it.
00:36:32.054 [2024-12-13 06:42:23.423140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.423172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.423394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.423428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.423567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.423600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.423812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.423844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.424023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.424056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.424161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.424192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.424375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.424411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.424542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.424579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.424864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.424894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.425087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.425119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.425303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.425336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.425512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.425549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.425769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.425803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.426020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.426052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.426237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.426269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.426394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.426427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.426686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.426757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.427026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.427097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.427242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.427278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.427476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.427512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.427621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.427656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.427760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.427792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.427978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.428011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.428195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.428227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.428549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.428585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.428706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.428739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.428841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.428873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.428978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.429010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.429185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.429218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.429394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.429427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 
00:36:32.054 [2024-12-13 06:42:23.429630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.429663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.429799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.429831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.430077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.430110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.054 [2024-12-13 06:42:23.430294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.054 [2024-12-13 06:42:23.430326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.054 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.430504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.430539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.430655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.430686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.430969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.431121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.431398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.431563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.431708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.431927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.431960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.432135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.432168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.432286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.432318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.432518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.432557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.432740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.432773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.432944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.432977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.433150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.433183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.433394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.433426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.433703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.433737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.433928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.433961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.434153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.434185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.434462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.434496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.434608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.434641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.434773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.434805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.434932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.434965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.435169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.435201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.435323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.435354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.435476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.435511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.435701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.435734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.435919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.435951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.436070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.436104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.436295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.436327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.436440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.436480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.436671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.436704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 
00:36:32.055 [2024-12-13 06:42:23.436820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.436852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.437029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.437060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.437230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.437262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.055 qpair failed and we were unable to recover it. 00:36:32.055 [2024-12-13 06:42:23.437461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.055 [2024-12-13 06:42:23.437494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.437623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.437655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.437827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.437859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.438056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.438095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.438268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.438300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.438600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.438633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.438822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.438854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.439076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.439108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.439229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.439262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.439474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.439507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.439700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.439733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.439916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.439948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.440052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.440084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.440204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.440236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.440425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.440466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.440642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.440674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.440866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.440898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.441087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.441120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.441235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.441268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.441486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.441520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.441710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.441742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.441919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.441951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.442216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.442248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.442431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.442473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.442662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.442694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.442799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.442831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.443010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.443214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.443374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.443526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.443730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.443874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.443907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.444148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.444180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.444310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.444343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.444543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.444578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.444705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.444738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.444965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.444997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 
00:36:32.056 [2024-12-13 06:42:23.445099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.445131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.445332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.445365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.445490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.056 [2024-12-13 06:42:23.445523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.056 qpair failed and we were unable to recover it. 00:36:32.056 [2024-12-13 06:42:23.445740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.445773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.445907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.445939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.446127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.446160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.446339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.446372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.446571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.446605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.446786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.446818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.447088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.447121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.447333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.447365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.447567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.447602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.447718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.447751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.447922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.447954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.448190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.448224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.448463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.448496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.448640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.448672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.448785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.448817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.449059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.449091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.449204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.449236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.449423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.449478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.449659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.449692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.449929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.449960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.450090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.450230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.450471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.450615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.450757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.450962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.450994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.451204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.451237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.451477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.451511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.451688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.451720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.451984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.452015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.452215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.452247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.452443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.452490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.452698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.452731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.452917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.452950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.453127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.453158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.453330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.453362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.453601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.453634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 
00:36:32.057 [2024-12-13 06:42:23.453820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.453852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.454029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.057 [2024-12-13 06:42:23.454061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.057 qpair failed and we were unable to recover it. 00:36:32.057 [2024-12-13 06:42:23.454244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.454275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.454387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.454420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.454641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.454675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.454862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.454894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.455088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.455120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.455386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.455418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.455571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.455604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.455801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.455833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.456022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.456054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.456262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.456294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.456474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.456508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.456721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.456753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.456933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.456965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.457092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.457241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.457383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.457546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.457705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.457949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.457981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.458161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.458199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.458496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.458529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.458731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.458762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.459030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.459062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.459254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.459286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.459409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.459441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.459647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.459680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.459962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.459994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.460119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.460150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.460267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.460299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.460487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.460520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.460656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.460688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.460900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.460931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 00:36:32.058 [2024-12-13 06:42:23.461062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.058 [2024-12-13 06:42:23.461094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.058 qpair failed and we were unable to recover it. 
00:36:32.058 [2024-12-13 06:42:23.461233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.461266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.461484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.461518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.461647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.461678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.461813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.461845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.461961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.461993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.462112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.462145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.058 [2024-12-13 06:42:23.462254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.058 [2024-12-13 06:42:23.462286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.058 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.462410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.462442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.462666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.462698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.462888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.462921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.463027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.463060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.463250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.463281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.463480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.463514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.463700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.463738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.463852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.463883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.464087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.464120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.464218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.464248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.464422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.464465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.464709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.464741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.464975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.465007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.465202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.465234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.465441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.465484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.465709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.465741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.466024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.466057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.466194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.466226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.466408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.466440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.466711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.466744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.466926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.466998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.467216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.467251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.467535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.467572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.467761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.467793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.467917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.467949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.468931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.468963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.469202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.469234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.469474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.469508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.059 [2024-12-13 06:42:23.469709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.059 [2024-12-13 06:42:23.469750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.059 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.469995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.470027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.470142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.470174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.470441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.470495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.470690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.470721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.470854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.470887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.471151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.471185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.471395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.471426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.471557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.471593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.471781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.471813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.471994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.472026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.472232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.472265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.472385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.472417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.472549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.472582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.472768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.472801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.472972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.473004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.473212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.473245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.473424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.473464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.473592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.473624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.473817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.473849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.474033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.474065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.474247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.474280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.474522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.474555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.474737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.474769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.474972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.475107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.475242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.475386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.475551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.475820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.475853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.476854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.476886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.477168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.477200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.477383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.477415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.477657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.477693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.477886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.060 [2024-12-13 06:42:23.477919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.060 qpair failed and we were unable to recover it.
00:36:32.060 [2024-12-13 06:42:23.478035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.478066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.478248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.478280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.478460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.478494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.478620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.478651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.478791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.478822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.479938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.479971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.480094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.480127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.480234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.480266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.480384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.480416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.480713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.061 [2024-12-13 06:42:23.480794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.061 qpair failed and we were unable to recover it.
00:36:32.061 [2024-12-13 06:42:23.480935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.480970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.481152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.481185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.481320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.481353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.481576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.481611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.481863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.481896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 
00:36:32.061 [2024-12-13 06:42:23.482087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.482119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.482231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.482262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.482534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.482568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.482697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.482729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.482841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.482872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 
00:36:32.061 [2024-12-13 06:42:23.483132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.483163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.483264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.483296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.483488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.483520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.483668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.483701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.483897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.483928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 
00:36:32.061 [2024-12-13 06:42:23.484110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.484143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.484315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.484346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.484529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.484564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.484746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.484778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.484967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.484999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 
00:36:32.061 [2024-12-13 06:42:23.485105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.485137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.485250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.485281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.485579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.061 [2024-12-13 06:42:23.485613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.061 qpair failed and we were unable to recover it. 00:36:32.061 [2024-12-13 06:42:23.485802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.485833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.485955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.485987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.486252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.486284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.486537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.486571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.486816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.486848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.487111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.487145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.487321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.487354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.487542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.487576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.487714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.487747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.488020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.488051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.488181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.488227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.488498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.488598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.488983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.489059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.489337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.489370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.489561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.489596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.489715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.489748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.489942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.489982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.490172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.490204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.490384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.490415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.490543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.490578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.490717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.490748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.490936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.490967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.491097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.491129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.491408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.491439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.491725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.491759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.491893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.491924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.492106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.492138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.492257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.492289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.492532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.492566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.492686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.492721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.492846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.492878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.493055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.493087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.493296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.493327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.493441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.493482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.493673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.493705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.493894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.493931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.494125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.494157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 
00:36:32.062 [2024-12-13 06:42:23.494274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.494305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.494483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.062 [2024-12-13 06:42:23.494515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.062 qpair failed and we were unable to recover it. 00:36:32.062 [2024-12-13 06:42:23.494648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.494680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.494918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.494951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.495132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.495163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [2024-12-13 06:42:23.495284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.495315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.495517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.495550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.495680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.495711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.496021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.496052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.496245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.496278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [2024-12-13 06:42:23.496411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.496442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.496593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.496625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.496797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.496830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.497020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.497221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [2024-12-13 06:42:23.497375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.497616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.497772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.497937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.497968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.498079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.498117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [2024-12-13 06:42:23.498236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.498271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.498508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.498540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.498662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.498693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.498869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.498900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 00:36:32.063 [2024-12-13 06:42:23.499074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.499105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [2024-12-13 06:42:23.499288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.063 [2024-12-13 06:42:23.499325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.063 qpair failed and we were unable to recover it. 
00:36:32.063 [... identical error sequence — posix.c:1054:posix_sock_create: connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats continuously from 2024-12-13 06:42:23.499446 through 06:42:23.523730 ...]
00:36:32.066 [2024-12-13 06:42:23.523981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.524127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.524275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.524481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.524633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 
00:36:32.066 [2024-12-13 06:42:23.524858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.524891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.525009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.525041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.525222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.525254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.525378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.525410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.525536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.525572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 
00:36:32.066 [2024-12-13 06:42:23.525686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.525718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.525993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.526029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.526216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.526249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.066 [2024-12-13 06:42:23.526360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.066 [2024-12-13 06:42:23.526392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.066 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.526581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.526613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.526743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.526776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.526954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.526989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.527173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.527207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.527380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.527413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.527611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.527644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.527834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.527867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.527992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.528026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.528204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.528239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.528516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.528550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.528755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.528787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.528981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.529015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.529324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.529360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.529475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.529508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.529641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.529674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.529786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.529818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.530086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.530119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.530232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.530264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.530396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.530427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.530653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.530685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.530801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.530832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.531091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.531123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.531359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.531393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.531582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.531615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.531785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.531817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.532011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.532050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.532271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.532302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.532423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.532479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.532739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.532771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.533057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.533089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.533223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.533254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.533433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.533477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.533724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.533756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.533924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.533955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.534191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.534222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.534400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.534432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 
00:36:32.067 [2024-12-13 06:42:23.534683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.534714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.534894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.534925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.067 [2024-12-13 06:42:23.535181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.067 [2024-12-13 06:42:23.535213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.067 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.535428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.535472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.535712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.535743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 
00:36:32.068 [2024-12-13 06:42:23.535876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.535908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.536079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.536111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.536214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.536245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.536479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.536512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.536619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.536651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 
00:36:32.068 [2024-12-13 06:42:23.536826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.536857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.537117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.537149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.537317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.537349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.537480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.537513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.537691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.537723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 
00:36:32.068 [2024-12-13 06:42:23.537894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.537926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.538107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.538182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.538378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.538414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.538688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.538722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.539011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.539042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 
00:36:32.068 [2024-12-13 06:42:23.539168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.539199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.539380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.539412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.539642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.539675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.539884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.539916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 00:36:32.068 [2024-12-13 06:42:23.540026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.068 [2024-12-13 06:42:23.540057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.068 qpair failed and we were unable to recover it. 
00:36:32.068 [2024-12-13 06:42:23.540193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.068 [2024-12-13 06:42:23.540225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420
00:36:32.068 qpair failed and we were unable to recover it.
[... the same three-line error repeats ~110 more times for tqpair=0x7fb7c8000b90, timestamps 06:42:23.540351 through 06:42:23.563719, all with addr=10.0.0.2, port=4420 ...]
00:36:32.071 [2024-12-13 06:42:23.564054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.071 [2024-12-13 06:42:23.564126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.071 qpair failed and we were unable to recover it.
[... the same error repeats 3 more times for tqpair=0x7fb7cc000b90, timestamps 06:42:23.564432 through 06:42:23.564859 ...]
00:36:32.071 [2024-12-13 06:42:23.565099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.565130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.565266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.565298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.565538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.565572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.565777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.565809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.566050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.566081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 
00:36:32.071 [2024-12-13 06:42:23.566264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.566296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.566471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.566503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.566700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.566731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.566976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.567007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.567133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.567164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 
00:36:32.071 [2024-12-13 06:42:23.567413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.567445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.567630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.567662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.567945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.567976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.071 [2024-12-13 06:42:23.568172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.071 [2024-12-13 06:42:23.568203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.071 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.568494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.568526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.568701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.568733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.568972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.569004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.569137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.569168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.569413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.569444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.569741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.569774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.569888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.569919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.570023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.570055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.570235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.570266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.570531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.570564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.570756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.570787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.570903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.570934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.571174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.571206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.571330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.571362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.571498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.571530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.571745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.571776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.571947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.571979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.572140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.572172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.572466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.572498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.572625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.572657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.572773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.572804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.573041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.573072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.573323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.573360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.573595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.573627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.573742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.573774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.573900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.573931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.574168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.574200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.574300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.574331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.574501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.574534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.574664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.574696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.574886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.574917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 
00:36:32.072 [2024-12-13 06:42:23.575048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.575079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.575209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.575240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.575439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.575480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.575612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.575644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.072 qpair failed and we were unable to recover it. 00:36:32.072 [2024-12-13 06:42:23.575834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.072 [2024-12-13 06:42:23.575865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.576041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.576073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.576222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.576254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.576541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.576573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.576701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.576732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.576855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.576887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.577130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.577161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.577349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.577381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.577551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.577583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.577772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.577803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.577982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.578013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.578222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.578253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.578425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.578468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.578709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.578740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.578942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.578973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.579158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.579189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.579386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.579418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.579599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.579631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.579819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.579850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.580089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.580120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.580256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.580286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.580413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.580444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.580643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.580675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.580962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.580993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.581246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.581277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.581402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.581433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.073 [2024-12-13 06:42:23.581650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.581683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.581944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.581981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.582244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.582275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.582464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.582497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 00:36:32.073 [2024-12-13 06:42:23.582687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.073 [2024-12-13 06:42:23.582719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.073 qpair failed and we were unable to recover it. 
00:36:32.076 [2024-12-13 06:42:23.606945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.606976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.607150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.607181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.607421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.607481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.607677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.607710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.607950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.607982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 
00:36:32.076 [2024-12-13 06:42:23.608194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.608226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.608493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.608531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.608747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.608779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.608960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.608992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.609094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.609126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 
00:36:32.076 [2024-12-13 06:42:23.609387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.609419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.609605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.609638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.609832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.609864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.610002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.610034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.076 qpair failed and we were unable to recover it. 00:36:32.076 [2024-12-13 06:42:23.610206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.076 [2024-12-13 06:42:23.610238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.610407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.610437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.610637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.610669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.610873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.610905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.611037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.611068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.611253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.611284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.611486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.611520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.611698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.611730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.611861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.611892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.612015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.612046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.612260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.612292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.612470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.612502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.612675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.612707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.612879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.612910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.613167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.613200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.613321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.613353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.613602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.613634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.613824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.613861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.613986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.614018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.614137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.614170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.614394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.614426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.614639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.614671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.614866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.614897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.615072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.615103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.615275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.615307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.615501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.615534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.615713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.615745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.615986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.616192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.616398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.616550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 
00:36:32.077 [2024-12-13 06:42:23.616752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.616901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.616939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.617206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.617238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.077 [2024-12-13 06:42:23.617412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.077 [2024-12-13 06:42:23.617444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.077 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.617647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.617680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.617866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.617898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.618079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.618111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.618346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.618379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.618516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.618549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.618741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.618773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.618956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.618988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.619181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.619213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.619329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.619362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.619597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.619631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.619877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.619909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.620091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.620125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.620303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.620336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.620512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.620545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.620782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.620813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.620943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.620974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.621088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.621119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.621292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.621324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.621548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.621580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.621769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.621801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.621914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.621945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.622126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.622158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.622331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.622363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.622481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.622513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.622710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.622742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.622850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.622882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.622986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.623118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.623268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.623489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.623713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.623852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.623883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.624069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.624102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.624223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.624254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.624437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.624477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.624647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.624679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 
00:36:32.078 [2024-12-13 06:42:23.624851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.624884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.625077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.625114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.078 [2024-12-13 06:42:23.625305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.078 [2024-12-13 06:42:23.625336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.078 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.625519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.625552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.625670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.625702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.625893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.625924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.626123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.626154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.626282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.626313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.626425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.626463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.626702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.626733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.626927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.626958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.627071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.627102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.627274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.627305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.627502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.627535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.627729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.627760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.627977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.628113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.628267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.628489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.628654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.628879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.628909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.629025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.629056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.629247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.629278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.629540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.629572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.629745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.629776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.629951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.629982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.630244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.630276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.630426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.630463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.630657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.630689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.630883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.630914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.631106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.631137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.631342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.631373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.631558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.631590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.631704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.631735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.631994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.632025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.632195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.632225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.632468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.632499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.632614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.632646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.632754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.632785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.632974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.633005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 
00:36:32.079 [2024-12-13 06:42:23.633266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.633298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.079 [2024-12-13 06:42:23.633478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.079 [2024-12-13 06:42:23.633516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.079 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.633685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.633716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.633931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.633962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.634201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.634232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.634510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.634543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.634735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.634766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.634906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.634937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.635139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.635170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.635430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.635470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.635589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.635621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.635747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.635778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.635969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.636000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.636194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.636225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.636490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.636537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.636718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.636749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.636857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.636888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.637100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.637131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.637384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.637416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.637680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.637712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.637926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.637958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.638132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.638163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.638360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.638391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.638579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.638612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.638868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.638899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.639160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.639191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.639355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.639386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.639621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.639653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.639888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.639965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.640141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.640211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.640436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.640486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.640670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.640702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.640897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.640929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.641103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.641135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.641311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.641342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.641533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.641565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.641737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.641769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.642032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.642064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.642186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.642218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.080 [2024-12-13 06:42:23.642406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.642438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 
00:36:32.080 [2024-12-13 06:42:23.642582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.080 [2024-12-13 06:42:23.642614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.080 qpair failed and we were unable to recover it. 00:36:32.081 [2024-12-13 06:42:23.642852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.081 [2024-12-13 06:42:23.642893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.081 qpair failed and we were unable to recover it. 00:36:32.081 [2024-12-13 06:42:23.643075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.081 [2024-12-13 06:42:23.643106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.081 qpair failed and we were unable to recover it. 00:36:32.081 [2024-12-13 06:42:23.643292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.081 [2024-12-13 06:42:23.643324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.081 qpair failed and we were unable to recover it. 00:36:32.081 [2024-12-13 06:42:23.643509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.081 [2024-12-13 06:42:23.643542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.081 qpair failed and we were unable to recover it. 
00:36:32.081 [2024-12-13 06:42:23.643651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.081 [2024-12-13 06:42:23.643682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.081 qpair failed and we were unable to recover it.
[... the same pair of errors — posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp_qpair_connect_sock: sock connection error for addr=10.0.0.2, port=4420 and "qpair failed and we were unable to recover it." — repeats continuously from 06:42:23.643 through 06:42:23.669, across tqpairs 0x7fb7c8000b90, 0x7fb7cc000b90, and 0x7fb7d4000b90 ...]
00:36:32.084 [2024-12-13 06:42:23.669977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.670009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.670179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.670216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.670476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.670509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.670681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.670712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.670907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.670938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.671144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.671176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.671475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.671508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.671732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.671947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.671978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.672246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.672277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.672409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.672441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.672647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.672680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.672811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.672842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.673107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.673138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.673377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.673410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.673620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.673653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.673890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.673921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.674112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.674143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.674345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.674376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.674625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.674659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.674845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.674877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.675077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.675108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.675277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.675308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.675496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.675529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.675665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.675696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.675935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.675967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.676204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.676235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.676455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.676488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.676708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.676740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.676863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.676895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.084 [2024-12-13 06:42:23.677094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.677125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.677301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.677333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.677467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.677499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.677687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.677718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 00:36:32.084 [2024-12-13 06:42:23.677896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.084 [2024-12-13 06:42:23.677928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.084 qpair failed and we were unable to recover it. 
00:36:32.085 [2024-12-13 06:42:23.678117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.085 [2024-12-13 06:42:23.678148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.085 qpair failed and we were unable to recover it. 00:36:32.085 [2024-12-13 06:42:23.678359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.085 [2024-12-13 06:42:23.678391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.085 qpair failed and we were unable to recover it. 00:36:32.085 [2024-12-13 06:42:23.678568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.085 [2024-12-13 06:42:23.678602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.085 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.678731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.678761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.678954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.678986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.679105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.679137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.679374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.679471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.679750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.679787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.679997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.680030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.680135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.680167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.680374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.680405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.680685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.680718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.680896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.680928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.681049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.681080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.681251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.681282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.681509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.681542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.681731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.681763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.682031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.682062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.682186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.682217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.682463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.682495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.682762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.682794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.682971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.683002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.683195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.683227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.683348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.683380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.683583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.683616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.683857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.683889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.684012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.684043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.684280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.684311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.684526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.684559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.684761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.684793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.684916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.684948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.685116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.685147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.685328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.685359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.685469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.685509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.685686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.685718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.367 [2024-12-13 06:42:23.685835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.685866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.686109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.686141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.686321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.686352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.686477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.686510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 00:36:32.367 [2024-12-13 06:42:23.686653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.367 [2024-12-13 06:42:23.686684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.367 qpair failed and we were unable to recover it. 
00:36:32.368 [2024-12-13 06:42:23.686925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.368 [2024-12-13 06:42:23.686956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.368 qpair failed and we were unable to recover it. 00:36:32.368 [2024-12-13 06:42:23.687194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.368 [2024-12-13 06:42:23.687226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.368 qpair failed and we were unable to recover it. 00:36:32.368 [2024-12-13 06:42:23.687410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.368 [2024-12-13 06:42:23.687441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.368 qpair failed and we were unable to recover it. 00:36:32.368 [2024-12-13 06:42:23.687670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.368 [2024-12-13 06:42:23.687704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.368 qpair failed and we were unable to recover it. 00:36:32.368 [2024-12-13 06:42:23.687821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.368 [2024-12-13 06:42:23.687855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.368 qpair failed and we were unable to recover it. 
00:36:32.370 [2024-12-13 06:42:23.711741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.370 [2024-12-13 06:42:23.711773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.712011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.712043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.712160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.712192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.712365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.712397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.712681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.712714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.712899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.712932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.713051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.713083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.713253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.713284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.713461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.713494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.713685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.713717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.713844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.713875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.714017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.714048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.714178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.714211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.714322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.714353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.714547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.714579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.714815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.714846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.715016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.715048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.715191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.715222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.715409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.715441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.715647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.715679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.715805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.715837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.716077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.716108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.716226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.716257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.716471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.716503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.716626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.716658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.716853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.716886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.716992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.717137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.717352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.717508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 
00:36:32.371 [2024-12-13 06:42:23.717719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.717922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.717953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.371 qpair failed and we were unable to recover it. 00:36:32.371 [2024-12-13 06:42:23.718143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.371 [2024-12-13 06:42:23.718175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.718296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.718328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.718466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.718499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.718623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.718655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.718785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.718816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.718921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.718952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.719166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.719202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.719418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.719466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.719588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.719620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.719811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.719841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.720024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.720056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.720189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.720222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.720485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.720518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.720696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.720727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.720911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.720943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.721057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.721087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.721271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.721302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.721549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.721582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.721698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.721730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.721921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.721952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.722226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.722258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.722445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.722485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.722698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.722730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.722936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.722967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.723145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.723177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.723348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.723380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.723516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.723549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.723831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.723862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.724045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.724076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.724318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.724349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.724557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.724591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.724847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.724877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.725073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.725105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.725298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.725330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.725532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.725565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.725747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.725778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.725964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.725996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.726182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.726212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 
00:36:32.372 [2024-12-13 06:42:23.726483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.372 [2024-12-13 06:42:23.726516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.372 qpair failed and we were unable to recover it. 00:36:32.372 [2024-12-13 06:42:23.726771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.726802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.726982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.727013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.727338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.727369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.727540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.727572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.727779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.727811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.728068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.728098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.728269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.728300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.728482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.728521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.728714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.728745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.728923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.728953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.729150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.729181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.729316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.729347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.729591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.729623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.729891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.729922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.730109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.730141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.730266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.730297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.730475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.730507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.730706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.730737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.730913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.730944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.731070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.731101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.731347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.731379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.731628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.731660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.731848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.731879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.732169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.732200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.732439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.732492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.732680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.732711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.732920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.732952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.733060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.733092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.733281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.733313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.733580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.733613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.733785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.733816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.734000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.734242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.734391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.734619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.734763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.734901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.734932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.735114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.735144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 00:36:32.373 [2024-12-13 06:42:23.735335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.373 [2024-12-13 06:42:23.735367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.373 qpair failed and we were unable to recover it. 
00:36:32.373 [2024-12-13 06:42:23.735635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.735667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.735786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.735817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.735994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.736026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.736269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.736300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.736564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.736598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.736733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.736764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.736891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.736922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.737212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.737244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.737437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.737485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.737682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.737714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.737907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.737938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.738066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.738097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.738277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.738308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.738544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.738576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.738757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.738789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.738996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.739026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.739212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.739242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.739431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.739471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.739644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.739674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.739858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.739887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.739992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.740022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.740215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.740244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.740438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.740475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.740609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.740638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.740881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.740911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.741099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.741127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.741330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.741359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.741528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.741559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.741728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.741757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.741963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.741992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.742172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.742201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.742388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.742418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.742602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.742632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.742813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.742843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.743034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.743064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 
00:36:32.374 [2024-12-13 06:42:23.743328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.743359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.743491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.743522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.743696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.743726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.743910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.374 [2024-12-13 06:42:23.743939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.374 qpair failed and we were unable to recover it. 00:36:32.374 [2024-12-13 06:42:23.744129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.744159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 
00:36:32.375 [2024-12-13 06:42:23.744287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.744317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.744501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.744532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.744638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.744670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.744846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.744876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.745054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.745087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 
00:36:32.375 [2024-12-13 06:42:23.745285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.745316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.745430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.745479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.745626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.745657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.745772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.745808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.746001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.746032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 
00:36:32.375 [2024-12-13 06:42:23.746216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.746247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.746420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.746461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.746673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.746706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.746923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.746954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.747086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.747118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 
00:36:32.375 [2024-12-13 06:42:23.747291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.747323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.747431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.747473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.747646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.747679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.747895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.747928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 00:36:32.375 [2024-12-13 06:42:23.748179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.375 [2024-12-13 06:42:23.748211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.375 qpair failed and we were unable to recover it. 
00:36:32.375 [2024-12-13 06:42:23.748329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.375 [2024-12-13 06:42:23.748361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.375 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats for tqpair=0x7fb7cc000b90 with timestamps 06:42:23.748471 through 06:42:23.750951 ...]
00:36:32.375 [2024-12-13 06:42:23.751161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.375 [2024-12-13 06:42:23.751231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.376 qpair failed and we were unable to recover it.
[... the same sequence repeats for tqpair=0x12c8cd0 with timestamps 06:42:23.751441 through 06:42:23.757047 ...]
[... the same sequence then repeats again for tqpair=0x7fb7cc000b90 with timestamps 06:42:23.757180 through 06:42:23.772284 ...]
00:36:32.378 [2024-12-13 06:42:23.772399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.378 [2024-12-13 06:42:23.772431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.378 qpair failed and we were unable to recover it.
00:36:32.378 [2024-12-13 06:42:23.772559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.772590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.772694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.772726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.772903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.772933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.773122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.773154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.773367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.773398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 
00:36:32.378 [2024-12-13 06:42:23.773657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.773690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.773880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.773911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.774088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.774125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.774309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.774342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.774462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.774494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 
00:36:32.378 [2024-12-13 06:42:23.774673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.774704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.774913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.774944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.775138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.775169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.775293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.775325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 00:36:32.378 [2024-12-13 06:42:23.775518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.775551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.378 qpair failed and we were unable to recover it. 
00:36:32.378 [2024-12-13 06:42:23.775726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.378 [2024-12-13 06:42:23.775758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.775959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.775991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.776174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.776206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.776418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.776467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.776711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.776742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.776870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.776902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.777150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.777182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.777359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.777391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.777587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.777620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.777743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.777775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.777899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.777931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.778050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.778082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.778250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.778282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.778517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.778549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.778747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.778779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.778976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.779128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.779345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.779494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.779736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.779939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.779971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.780142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.780175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.780302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.780332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.780444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.780502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.780625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.780658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.780760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.780792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.781031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.781065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.781330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.781363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.781482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.781516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.781623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.781654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.781787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.781823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.782036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.782078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.782316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.782360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.782561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.782595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.782724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.782755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 
00:36:32.379 [2024-12-13 06:42:23.782942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.782974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.783220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.783253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.783372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.783403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.783517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.379 [2024-12-13 06:42:23.783549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.379 qpair failed and we were unable to recover it. 00:36:32.379 [2024-12-13 06:42:23.783676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.783709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.380 [2024-12-13 06:42:23.783882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.783914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.784080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.784111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.784370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.784402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.784614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.784647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.784874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.784906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.380 [2024-12-13 06:42:23.785143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.785174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.785444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.785489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.785753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.785784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.785911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.785942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.786112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.786144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.380 [2024-12-13 06:42:23.786410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.786444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.786558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.786589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.786848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.786880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.786997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.787029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.787274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.787312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.380 [2024-12-13 06:42:23.787443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.787492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.787734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.787768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.787959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.787991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.788104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.788137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.788407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.788439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.380 [2024-12-13 06:42:23.788590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.788623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.788890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.788923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.789137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.789168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.789345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.789378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 00:36:32.380 [2024-12-13 06:42:23.789561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.380 [2024-12-13 06:42:23.789595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.380 qpair failed and we were unable to recover it. 
00:36:32.383 [2024-12-13 06:42:23.813934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.813966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.814137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.814168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.814317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.814348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.814527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.814560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.814748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.814782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 
00:36:32.383 [2024-12-13 06:42:23.814960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.814991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.815092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.815122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.815357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.815388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.815498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.815530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.815713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.815744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 
00:36:32.383 [2024-12-13 06:42:23.815881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.815912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.816081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.816112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.816296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.816329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.816486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.816520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 00:36:32.383 [2024-12-13 06:42:23.816719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.383 [2024-12-13 06:42:23.816751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.383 qpair failed and we were unable to recover it. 
00:36:32.383 [2024-12-13 06:42:23.816944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.816977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.817088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.817120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.817382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.817420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.817550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.817581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.817752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.817784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.817960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.817991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.818187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.818218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.818329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.818360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.818477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.818510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.818631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.818662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.818854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.818885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.819057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.819088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.819203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.819234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.819425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.819462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.819714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.819745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.819983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.820157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.820348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.820559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.820801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.820949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.820981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.821161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.821193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.821444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.821483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.821613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.821644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.821766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.821798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.821917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.821948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.822132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.822163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.822352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.822384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.822566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.822599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.822787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.822820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.823002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.823034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.823203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.823234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.823406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.823438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.823639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.823672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 00:36:32.384 [2024-12-13 06:42:23.823784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.384 [2024-12-13 06:42:23.823815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.384 qpair failed and we were unable to recover it. 
00:36:32.384 [2024-12-13 06:42:23.824002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.824230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.824375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.824548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.824685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 [2024-12-13 06:42:23.824883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.824915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.825030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.825181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.825319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.825462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 [2024-12-13 06:42:23.825602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.825847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.825878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.826049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.826080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.826267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.826298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.826419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.826460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 [2024-12-13 06:42:23.826636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.826668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.826843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.826874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.827087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.827118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.827314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.827345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.827519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.827551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 [2024-12-13 06:42:23.827732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.827764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.827959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.827990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.828206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.828238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.828406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.828438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.828646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.828679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 [2024-12-13 06:42:23.829013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.829045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.829166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.829197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.829361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.829393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.829646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.829679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.829918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.829950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 
00:36:32.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1208578 Killed "${NVMF_APP[@]}" "$@"
00:36:32.385 [2024-12-13 06:42:23.830139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.385 [2024-12-13 06:42:23.830171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.385 qpair failed and we were unable to recover it.
00:36:32.385 [2024-12-13 06:42:23.830379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.385 [2024-12-13 06:42:23.830410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.385 qpair failed and we were unable to recover it.
00:36:32.385 [2024-12-13 06:42:23.830614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.385 [2024-12-13 06:42:23.830647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.385 qpair failed and we were unable to recover it.
00:36:32.385 [2024-12-13 06:42:23.830777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.385 [2024-12-13 06:42:23.830808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.385 qpair failed and we were unable to recover it.
00:36:32.385 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:32.385 [2024-12-13 06:42:23.831080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.831113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:32.385 [2024-12-13 06:42:23.831252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.831283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 [2024-12-13 06:42:23.831498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.385 [2024-12-13 06:42:23.831531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.385 qpair failed and we were unable to recover it. 00:36:32.385 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:32.385 [2024-12-13 06:42:23.831708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.831740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.831859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:32.386 [2024-12-13 06:42:23.831891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.832002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.832132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.386 [2024-12-13 06:42:23.832355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it.
00:36:32.386 [2024-12-13 06:42:23.832575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.832793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.832938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.832970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.833173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.833204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.833332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.833364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.833473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.833510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.833616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.833648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.833828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.833859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.833969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.834125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.834260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.834467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.834669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.834881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.834913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.835093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.835125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.835366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.835398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.835550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.835582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.835768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.835800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.835918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.835949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.836148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.836180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.836354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.836385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.836623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.836655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.836795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.836826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.837009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.837041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.837164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.837196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.837412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.837444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.837623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.837654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.837775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.837806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.837992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.838023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.838141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.838178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 
00:36:32.386 [2024-12-13 06:42:23.838347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.838378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.838613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.838645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 [2024-12-13 06:42:23.838882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.386 [2024-12-13 06:42:23.838914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.386 qpair failed and we were unable to recover it. 00:36:32.386 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=1209280 00:36:32.387 [2024-12-13 06:42:23.839166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.839198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 1209280 00:36:32.387 [2024-12-13 06:42:23.839324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.839356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:32.387 [2024-12-13 06:42:23.839473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.839511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1209280 ']' 00:36:32.387 [2024-12-13 06:42:23.839723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.839755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.839964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.839999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:32.387 [2024-12-13 06:42:23.840259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.840291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:32.387 [2024-12-13 06:42:23.840495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.840528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:32.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:32.387 [2024-12-13 06:42:23.840742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.840776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.840912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.840944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.841128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.841160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 06:42:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.387 [2024-12-13 06:42:23.841403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.841436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.841619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.841651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.841887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.841919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.842029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.842177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.842342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.842585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.842720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.842872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.842908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.843094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.843232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.843425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.843587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.843738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.843907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.843940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.844117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.844149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.844255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.844290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.844493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.844526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.844772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.844803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.844916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.844948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.845158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.845191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.845315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.845346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 
00:36:32.387 [2024-12-13 06:42:23.845558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.845592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.845716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.845748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.387 [2024-12-13 06:42:23.845887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.387 [2024-12-13 06:42:23.845918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.387 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.846050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.846203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.846360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.846526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.846675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.846824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.846855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.847114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.847152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.847346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.847377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.847498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.847532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.847640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.847672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.847801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.847833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.848068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.848215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.848366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.848635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.848793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.848954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.848987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.849091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.849129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.849321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.849350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.849488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.849521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.849694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.849726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.849908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.849941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.850045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.850194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.850353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.850561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.850770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.850925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.850956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.851060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.851207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.851416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.851644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.851796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.851953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.851987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.852095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.852126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.852367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.852398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.852622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.852655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 
00:36:32.388 [2024-12-13 06:42:23.852768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.852800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.388 qpair failed and we were unable to recover it. 00:36:32.388 [2024-12-13 06:42:23.852967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.388 [2024-12-13 06:42:23.853000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.853107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.853139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.853234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.853266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.853372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.853404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.853611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.853644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.853836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.853867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.853986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.854018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.854205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.854238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.854477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.854509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.854639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.854671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.854781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.854813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.855073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.855105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.855351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.855383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.855540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.855574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.855772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.855803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.855922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.855953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.856102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.856258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.856418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.856574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.856727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.856946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.856977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.857160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.857191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.857365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.857398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.857580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.857613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.857727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.857770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.858052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.858084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.858208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.858241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.858422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.858488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.858681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.858713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.858894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.858927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.859045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.859077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.859285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.859316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.389 [2024-12-13 06:42:23.859554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.859589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 
00:36:32.389 [2024-12-13 06:42:23.859766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.389 [2024-12-13 06:42:23.859798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.389 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.860055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.860091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.860283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.860315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.860446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.860487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.860590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.860619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
00:36:32.390 [2024-12-13 06:42:23.860798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.860829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.860979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.861009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.861135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.861171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.861292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.861324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.861517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.861549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
00:36:32.390 [2024-12-13 06:42:23.861825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.861857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.861975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.862198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.862362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.862514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
00:36:32.390 [2024-12-13 06:42:23.862734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.862874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.862904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.863028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.863060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.863212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.863284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.863415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.863465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
00:36:32.390 [2024-12-13 06:42:23.863650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.863683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.863808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.863840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.864032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.864064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.864235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.864267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 00:36:32.390 [2024-12-13 06:42:23.864463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.864497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
00:36:32.390 [2024-12-13 06:42:23.864698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.390 [2024-12-13 06:42:23.864731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.390 qpair failed and we were unable to recover it. 
[... the preceding three-line error sequence (posix_sock_create connect() failure, errno = 111; nvme_tcp_qpair_connect_sock connection error; "qpair failed and we were unable to recover it.") repeats continuously, over 100 times, from 06:42:23.864698 through 06:42:23.887949, identical except for the timestamps and the failing tqpair handle — 0x7fb7d4000b90, 0x12c8cd0, 0x7fb7c8000b90, and 0x7fb7cc000b90 — all targeting addr=10.0.0.2, port=4420 ...]
00:36:32.393 [2024-12-13 06:42:23.888002] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:32.393 [2024-12-13 06:42:23.888041] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:32.393 [2024-12-13 06:42:23.888057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.888089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.888313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.888345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.888613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.888645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.888772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.888803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.888972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 
00:36:32.393 [2024-12-13 06:42:23.889281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.889434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.889601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.889761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.889920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.889952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 
00:36:32.393 [2024-12-13 06:42:23.890063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.890095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.890204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.890236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.393 qpair failed and we were unable to recover it. 00:36:32.393 [2024-12-13 06:42:23.890415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.393 [2024-12-13 06:42:23.890464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.890571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.890603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.890709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.890741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.890988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.891021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.891174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.891205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.891344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.891376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.891555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.891589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.891779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.891810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.891980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.892178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.892327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.892597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.892758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.892963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.892994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.893129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.893161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.893356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.893387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.893511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.893544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.893757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.893789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.893979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.894185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.894333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.894602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.894753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.894955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.894987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.895251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.895283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.895460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.895492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.895730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.895763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.895998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.896035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.896286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.896318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.896439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.896481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.896653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.896685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.896882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.896914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.897030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.897062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.897170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.897202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.897387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.897419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.897639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.897685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.897806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.897839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.898021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.898053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 
00:36:32.394 [2024-12-13 06:42:23.898228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.898260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.394 qpair failed and we were unable to recover it. 00:36:32.394 [2024-12-13 06:42:23.898499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.394 [2024-12-13 06:42:23.898532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.898651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.898683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.898883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.898914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.899089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.899120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 
00:36:32.395 [2024-12-13 06:42:23.899360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.899391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.899616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.899648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.899836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.899867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.900054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.900085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.900301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.900333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 
00:36:32.395 [2024-12-13 06:42:23.900464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.900497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.900744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.900776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.900973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.901004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.901125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.901156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.901338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.901369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 
00:36:32.395 [2024-12-13 06:42:23.901544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.901577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.901818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.901857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.902079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.902111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.902226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.902259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.902468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.902500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 
00:36:32.395 [2024-12-13 06:42:23.902682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.902714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.902829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.902862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.902977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.903009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.903214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.903246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 00:36:32.395 [2024-12-13 06:42:23.903362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.395 [2024-12-13 06:42:23.903393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.395 qpair failed and we were unable to recover it. 
00:36:32.395 [2024-12-13 06:42:23.903592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.903625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.903808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.903838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.904050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.904081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.904205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.904235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.904462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.904503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.904701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.904733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.904901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.904932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.905066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.905097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.905340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.905372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.905618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.905651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.905826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.905858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.906097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.906128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.906316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.906347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.906538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.906570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.906745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.395 [2024-12-13 06:42:23.906776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.395 qpair failed and we were unable to recover it.
00:36:32.395 [2024-12-13 06:42:23.906984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.907015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.907195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.907226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.907413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.907445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.907651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.907684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.907868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.907899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.908941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.908972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.909104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.909135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.909247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.909278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.909536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.909569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.909694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.909724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.909841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.909872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.910132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.910174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.910372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.910408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.910676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.910708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.910885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.910917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.911041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.911073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.911311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.911342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.911493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.911526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.911719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.911752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.911940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.911971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.912110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.912141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.912382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.912413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.912603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.912636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.912880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.912912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.913164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.913195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.913405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.913437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.913600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.913632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.913808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.913841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.914029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.914062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.914324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.914356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.914530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.914562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.914737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.914769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.914948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.914980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.915094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.915125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.396 qpair failed and we were unable to recover it.
00:36:32.396 [2024-12-13 06:42:23.915323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.396 [2024-12-13 06:42:23.915355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.915541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.915575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.915709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.915741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.915986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.916024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.916207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.916245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.916484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.916517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.916704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.916736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.916864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.916896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.917102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.917134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.917275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.917307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.917436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.917477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.917665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.917697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.917872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.917903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.918114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.918145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.918336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.918368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.918557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.918590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.918834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.918866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.919099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.919130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.919375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.919407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.919661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.919694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.919824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.919856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.920064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.920097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.920357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.920388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.920514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.920546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.920813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.920845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.921030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.921061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.921251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.921283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.921470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.921503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.921774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.921806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.921992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.922023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.922198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.922230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.922402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.922439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.922583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.922616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.922882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.922913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.923083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.923115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.923322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.923354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.923527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.923559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.923732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.923764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.923941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.923972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.924184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.397 [2024-12-13 06:42:23.924215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.397 qpair failed and we were unable to recover it.
00:36:32.397 [2024-12-13 06:42:23.924389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.924421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.924615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.924647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.924910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.924941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.925056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.925088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.925206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.925239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.925446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.925488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.925676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.925708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.925826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.925858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.926032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.926064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.926178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.926210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.926468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.926502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.926695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.926733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.926916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.926948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.927137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.927169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.927318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.927350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.927603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.927636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.927828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.927859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.927994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.928025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.928207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.928238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.928356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.928388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.928592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.398 [2024-12-13 06:42:23.928625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.398 qpair failed and we were unable to recover it.
00:36:32.398 [2024-12-13 06:42:23.928802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.928834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.929032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.929065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.929311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.929343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.929460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.929492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.929620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.929652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 
00:36:32.398 [2024-12-13 06:42:23.929935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.929967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.930170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.930202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.930336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.930367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.930551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.930586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.930703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.930735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 
00:36:32.398 [2024-12-13 06:42:23.930942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.930973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.931162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.931200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.931391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.398 [2024-12-13 06:42:23.931423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.398 qpair failed and we were unable to recover it. 00:36:32.398 [2024-12-13 06:42:23.931622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.931653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.931772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.931804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.931931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.931962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.932133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.932164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.932358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.932390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.932582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.932614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.932881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.932912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.933101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.933132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.933352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.933383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.933627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.933660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.933849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.933880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.934070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.934107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.934357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.934388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.934665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.934698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.934971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.935002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.935192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.935223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.935395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.935425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.935561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.935593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.935795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.935825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.936092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.936123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.936373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.936404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.936554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.936586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.936798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.936828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.937000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.937030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.937282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.937313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.937494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.937528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.937815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.937846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.937971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.938002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.938133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.938165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.938339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.938370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.938647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.938680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.938893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.938924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.939117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.939147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.939275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.939307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.939429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.939471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.939707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.939738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.939911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.939943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 
00:36:32.399 [2024-12-13 06:42:23.940182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.940213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.940336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.399 [2024-12-13 06:42:23.940369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.399 qpair failed and we were unable to recover it. 00:36:32.399 [2024-12-13 06:42:23.940502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.940535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.940721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.940752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.940871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.940901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 
00:36:32.400 [2024-12-13 06:42:23.941022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.941053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.941179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.941210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.941385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.941416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.941603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.941635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.941921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.941952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 
00:36:32.400 [2024-12-13 06:42:23.942135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.942166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.942309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.942341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.942531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.942564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.942686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.942716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.942832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.942869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 
00:36:32.400 [2024-12-13 06:42:23.942985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.943126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.943349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.943510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.943674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 
00:36:32.400 [2024-12-13 06:42:23.943905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.943936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.944105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.944136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.944264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.944295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.944481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.944513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 00:36:32.400 [2024-12-13 06:42:23.944768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.944800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it. 
00:36:32.400 [2024-12-13 06:42:23.944923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.400 [2024-12-13 06:42:23.944953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.400 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeats continuously from 06:42:23.944923 through 06:42:23.968037, for tqpair handles 0x7fb7c8000b90, 0x7fb7d4000b90, and 0x12c8cd0, all targeting addr=10.0.0.2, port=4420 ...]
00:36:32.403 [2024-12-13 06:42:23.968225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.968257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.968442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.968488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.968596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.968629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.968753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.968784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.968919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.968951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 
00:36:32.403 [2024-12-13 06:42:23.969195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.969227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.969353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.969385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.969623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.969656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.969784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.969817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.969933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.969964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 
00:36:32.403 [2024-12-13 06:42:23.970155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.970187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.970291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.970323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.970496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.970535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.970640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.403 [2024-12-13 06:42:23.970672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.403 qpair failed and we were unable to recover it. 00:36:32.403 [2024-12-13 06:42:23.970863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.970895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 
00:36:32.404 [2024-12-13 06:42:23.971019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.971051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.971314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.971347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.971479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.971512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.971739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.971772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.971905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.971938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 
00:36:32.404 [2024-12-13 06:42:23.972047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.972204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.972353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.972503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.972751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 
00:36:32.404 [2024-12-13 06:42:23.972903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.972935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.973120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.973151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.973255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.973287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.973397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.973429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 00:36:32.404 [2024-12-13 06:42:23.973557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.404 [2024-12-13 06:42:23.973591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.404 qpair failed and we were unable to recover it. 
00:36:32.404 [2024-12-13 06:42:23.973716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:32.404 [2024-12-13 06:42:23.973748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420
00:36:32.404 qpair failed and we were unable to recover it.
00:36:32.404 [2024-12-13 06:42:23.973829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:36:32.405 [2024-12-13 06:42:23.985384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.405 [2024-12-13 06:42:23.985425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.405 qpair failed and we were unable to recover it. 00:36:32.405 [2024-12-13 06:42:23.985625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.405 [2024-12-13 06:42:23.985659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.405 qpair failed and we were unable to recover it. 00:36:32.405 [2024-12-13 06:42:23.985868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.405 [2024-12-13 06:42:23.985900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.405 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.986124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.986156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.986271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.986314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.986425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.986466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.986575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.986606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.986803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.986835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.987023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.987192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.987392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.987552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.987757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.987893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.987931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.988107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.988139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.988261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.988292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.988404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.988435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.988649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.988682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.988880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.988911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.989091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.989122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.989304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.989336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.989532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.989566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.989681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.989712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.989976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.990007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.990183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.990215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.990391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.990423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.990545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.990578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.990890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.990923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.991188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.991220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.991350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.991383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.991643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.991675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.991812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.991843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.991962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.991994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.992179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.992212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.992315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.992346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.992585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.992618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.992745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.992779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.992906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.992944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.993125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.993162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.993370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.993408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 
00:36:32.406 [2024-12-13 06:42:23.993578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.993633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.406 [2024-12-13 06:42:23.993821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.406 [2024-12-13 06:42:23.993853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.406 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.993979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.994116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.994288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 
00:36:32.407 [2024-12-13 06:42:23.994582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.994753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.994905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.994937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.995049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.995213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 
00:36:32.407 [2024-12-13 06:42:23.995363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.995518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.995675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.995831] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:32.407 [2024-12-13 06:42:23.995865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:32.407 [2024-12-13 06:42:23.995873] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:32.407 [2024-12-13 06:42:23.995879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:32.407 [2024-12-13 06:42:23.995886] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:32.407 [2024-12-13 06:42:23.995845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.995877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 
00:36:32.407 [2024-12-13 06:42:23.995990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.996132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.996273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.996494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.996716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 
00:36:32.407 [2024-12-13 06:42:23.996929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.996961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.997085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.997117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.997305] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:32.407 [2024-12-13 06:42:23.997330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.997366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.997396] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:32.407 [2024-12-13 06:42:23.997525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:32.407 [2024-12-13 06:42:23.997526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:32.407 [2024-12-13 06:42:23.997711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.997781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 
00:36:32.407 [2024-12-13 06:42:23.997980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.998019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.998288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.998323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.998429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.998470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.407 [2024-12-13 06:42:23.998605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.407 [2024-12-13 06:42:23.998638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.407 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:23.998766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:23.998800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 
00:36:32.686 [2024-12-13 06:42:23.999066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:23.999098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:23.999341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:23.999377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:23.999499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:23.999537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:23.999732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:23.999762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.000001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.000034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 
00:36:32.686 [2024-12-13 06:42:24.000227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.000259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.000512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.000545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.000675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.000707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.000820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.000850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.001028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 
00:36:32.686 [2024-12-13 06:42:24.001191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.001328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.001477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.001620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 00:36:32.686 [2024-12-13 06:42:24.001837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.686 [2024-12-13 06:42:24.001867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.686 qpair failed and we were unable to recover it. 
00:36:32.686 [... the same three-line failure (posix.c:1054:posix_sock_create connect() errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt through 06:42:24.022, alternating between tqpair=0x7fb7d4000b90 and tqpair=0x7fb7cc000b90 ...]
00:36:32.689 [2024-12-13 06:42:24.023097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.023243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.023395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.023545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.023709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.023930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.023963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.024087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.024252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.024399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.024636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.024807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.024952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.024984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.025096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.025237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.025392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.025558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.025695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.025836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.025868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.026061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.026212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.026430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.026596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.026755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.026896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.026928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.027043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.027221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.027366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.027547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.027702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.027926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.027960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 
00:36:32.689 [2024-12-13 06:42:24.028157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.028190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.028313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.028348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.689 qpair failed and we were unable to recover it. 00:36:32.689 [2024-12-13 06:42:24.028469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.689 [2024-12-13 06:42:24.028504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.028635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.028668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.028776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.028807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 
00:36:32.692 [2024-12-13 06:42:24.028986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.029161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.029315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.029469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.029684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 
00:36:32.692 [2024-12-13 06:42:24.029818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.029953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.692 [2024-12-13 06:42:24.029996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.692 qpair failed and we were unable to recover it. 00:36:32.692 [2024-12-13 06:42:24.030118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.030148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.030244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.030274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.030398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.030435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.030628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.030658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.030832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.030862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.030971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.031115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.031384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.031529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.031668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.031799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.031925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.031954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.032060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.032212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.032348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.032505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.032724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.032874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.032903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.033006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.033705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.033955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.033984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.034086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.034115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.034215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.034243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 
00:36:32.693 [2024-12-13 06:42:24.034429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.034470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.034652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.034682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.693 [2024-12-13 06:42:24.034801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.693 [2024-12-13 06:42:24.034830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.693 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.034995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.035145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 
00:36:32.694 [2024-12-13 06:42:24.035280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.035417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.035562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.035702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 00:36:32.694 [2024-12-13 06:42:24.035825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.694 [2024-12-13 06:42:24.035854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.694 qpair failed and we were unable to recover it. 
00:36:32.695 [2024-12-13 06:42:24.052304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.052328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.052430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.052462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.052555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.052579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.052673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.052698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.052861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.052886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 
00:36:32.695 [2024-12-13 06:42:24.052993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.053018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.053108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.695 [2024-12-13 06:42:24.053132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.695 qpair failed and we were unable to recover it. 00:36:32.695 [2024-12-13 06:42:24.053235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.053435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.053591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.053713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.053830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.053965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.053990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.054394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.054966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.054990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.055091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.055213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.055336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.055470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.055584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.055885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.055910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.056014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.056133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.056262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.056511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.056707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.056885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.056910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.057002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.057134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.057263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 
00:36:32.696 [2024-12-13 06:42:24.057395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.057588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.696 [2024-12-13 06:42:24.057799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.696 [2024-12-13 06:42:24.057823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.696 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.057917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.057942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.058098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.058122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.058327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.058351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.058524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.058589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.058747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.058800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.059049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.059083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.059201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.059233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.059412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.059444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.059641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.059673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.059849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.059876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.059981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.060018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.060180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.060208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.060361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.060389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.060499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.060528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.060716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.060744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.061024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.061053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.061329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.061363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.061545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.061574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.061691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.061720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.061828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.061856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.062039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.062068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.062183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.062212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.062383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.062418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.062704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.062734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.062966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.062995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.063116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.063145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.063332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.063360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.063602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.063632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.063741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.063770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.063955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.063983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.064095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.064237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.064393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.064554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.064760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.064927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.064958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 00:36:32.697 [2024-12-13 06:42:24.065071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.697 [2024-12-13 06:42:24.065103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.697 qpair failed and we were unable to recover it. 
00:36:32.697 [2024-12-13 06:42:24.065212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.697 [2024-12-13 06:42:24.065244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 
00:36:32.697 qpair failed and we were unable to recover it. 
(the preceding three messages repeat 73 more times for tqpair=0x12c8cd0, timestamps 2024-12-13 06:42:24.065366 through 06:42:24.078830)
00:36:32.699 [2024-12-13 06:42:24.079030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.699 [2024-12-13 06:42:24.079064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 
00:36:32.699 qpair failed and we were unable to recover it. 
(the preceding three messages repeat 39 more times for tqpair=0x7fb7cc000b90, timestamps 2024-12-13 06:42:24.079192 through 06:42:24.086127)
00:36:32.700 [2024-12-13 06:42:24.086334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.700 [2024-12-13 06:42:24.086373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 
00:36:32.700 qpair failed and we were unable to recover it. 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x7fb7c8000b90, timestamps 06:42:24.086579 through 06:42:24.093416 ...]
00:36:32.702 [2024-12-13 06:42:24.093614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.093664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
00:36:32.702 [2024-12-13 06:42:24.093805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.093841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x12c8cd0, timestamps 06:42:24.093969 through 06:42:24.094363 ...]
00:36:32.702 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 
00:36:32.702 [2024-12-13 06:42:24.094484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.094518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
00:36:32.702 [2024-12-13 06:42:24.094630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.094665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
00:36:32.702 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x12c8cd0, timestamps 06:42:24.094792 through 06:42:24.095109 ...]
00:36:32.702 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 
00:36:32.702 [2024-12-13 06:42:24.095300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.095332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
00:36:32.702 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x12c8cd0, timestamps 06:42:24.095509 through 06:42:24.095703 ...]
00:36:32.702 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x12c8cd0, timestamps 06:42:24.095803 through 06:42:24.096183 ...]
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x12c8cd0, timestamps 06:42:24.096297 through 06:42:24.097791 ...]
00:36:32.702 [2024-12-13 06:42:24.097980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:32.702 [2024-12-13 06:42:24.098018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 
00:36:32.702 qpair failed and we were unable to recover it. 
[... same connect() failed / unrecoverable-qpair error repeated for tqpair=0x7fb7d4000b90, timestamps 06:42:24.098135 through 06:42:24.103726 ...]
00:36:32.703 [2024-12-13 06:42:24.103832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.103864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.103996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.104143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.104309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.104481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.104632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.104769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.104931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.104962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.105072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.105105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.105284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.105321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.105493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.105526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.105632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.105666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.105773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.105805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.105984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.106127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.106297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.106504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.106645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.106868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.106900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.107018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.107153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.107315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.107490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.107646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.107800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.107946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.107980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.108098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.108130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.108246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.108277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.108469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.108507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.108626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.108664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.108857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.108892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.108998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.109157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.109316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.109552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 
00:36:32.703 [2024-12-13 06:42:24.109693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.109833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.109865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.109973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.110004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.110128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.703 [2024-12-13 06:42:24.110160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.703 qpair failed and we were unable to recover it. 00:36:32.703 [2024-12-13 06:42:24.110334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.110367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.110488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.110521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.110642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.110674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.110800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.110831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.110946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.110977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.111146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.111312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.111459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.111616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.111752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.111897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.111934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.112053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.112254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.112398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.112564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.112713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.112909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.112943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.113048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.113198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.113425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.113578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.113730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.113867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.113899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.114011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.114165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.114313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.114539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.114794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.114941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.114972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.115144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.115175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.115394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.115428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 
00:36:32.704 [2024-12-13 06:42:24.115558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.115590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.115696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.115729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.115844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.115875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.704 qpair failed and we were unable to recover it. 00:36:32.704 [2024-12-13 06:42:24.115987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.704 [2024-12-13 06:42:24.116019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.116131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.116164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.116280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.116312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.116535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.116569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.116693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.116725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.116836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.116867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.117046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.117208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.117351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.117530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.117677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.117902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.117935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.118094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.118256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.118429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.118595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.118745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.118894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.118927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.119660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.119949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.119981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.120108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.120140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.120268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.120301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.120403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.120435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.120636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.120669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.120846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.120879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.121006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.121213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 
00:36:32.705 [2024-12-13 06:42:24.121373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.121597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.121745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.121896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.705 [2024-12-13 06:42:24.121931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.705 qpair failed and we were unable to recover it. 00:36:32.705 [2024-12-13 06:42:24.122045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.122177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.122327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.122490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.122643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.122796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.122829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.123012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.123168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.123378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.123538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.123682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.123819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.123851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.124670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.124947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.124979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.125104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.125249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.125392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.125568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.125782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.125924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.125956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.126066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.126200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.126353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.126569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.126720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.126872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.126906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.127021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.127174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.127326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.127477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.127639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 
00:36:32.706 [2024-12-13 06:42:24.127785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.127817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.128010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.706 [2024-12-13 06:42:24.128041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.706 qpair failed and we were unable to recover it. 00:36:32.706 [2024-12-13 06:42:24.128161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.128193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.128303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.128334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.128441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.128485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 
00:36:32.707 [2024-12-13 06:42:24.128611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.128643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.128758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.128790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.128971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.129175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:32.707 [2024-12-13 06:42:24.129338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 
00:36:32.707 [2024-12-13 06:42:24.129594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:32.707 [2024-12-13 06:42:24.129761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.129911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.129945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.707 [2024-12-13 06:42:24.130126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.130167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 
00:36:32.707 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.707 [2024-12-13 06:42:24.130287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.130321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.130507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.130541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.130656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.130684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.130785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.130815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.707 qpair failed and we were unable to recover it. 00:36:32.707 [2024-12-13 06:42:24.130988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.707 [2024-12-13 06:42:24.131017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 
00:36:32.708 [2024-12-13 06:42:24.131134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 00:36:32.708 [2024-12-13 06:42:24.131332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 00:36:32.708 [2024-12-13 06:42:24.131547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 00:36:32.708 [2024-12-13 06:42:24.131675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 00:36:32.708 [2024-12-13 06:42:24.131812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it. 
00:36:32.708 [2024-12-13 06:42:24.131952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.708 [2024-12-13 06:42:24.131981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.708 qpair failed and we were unable to recover it.
00:36:32.708 [... same connect() failed / sock connection error / qpair failed sequence repeated for every subsequent connection attempt from 06:42:24.132078 through 06:42:24.149884, against addr=10.0.0.2, port=4420, with tqpair handles 0x7fb7d4000b90, 0x7fb7c8000b90, and 0x12c8cd0 ...]
00:36:32.720 [2024-12-13 06:42:24.149995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.150135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.150262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.150520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.150713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 
00:36:32.720 [2024-12-13 06:42:24.150845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.150871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.150990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.151112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.151296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.151427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 
00:36:32.720 [2024-12-13 06:42:24.151624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.151905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.151937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.152069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.152213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.152349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 
00:36:32.720 [2024-12-13 06:42:24.152500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.152700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.152861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.152892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.153007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.153046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.153239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.153270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 
00:36:32.720 [2024-12-13 06:42:24.153461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.153494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.153671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.153703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.720 [2024-12-13 06:42:24.153827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.720 [2024-12-13 06:42:24.153859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.720 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.153969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.154183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.154340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.154493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.154640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.154802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.154835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.155005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.155151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.155367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.155533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.155692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.155857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.155890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.156007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.156261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.156401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.156588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.156744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.156886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.156917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.157042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.157194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.157471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.157622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.157757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.157903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.157935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.158114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.158147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.158279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.158311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.158485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.158519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.158695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.158727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.158927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.158960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.159137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 Malloc0 00:36:32.721 [2024-12-13 06:42:24.159169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.159369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.159402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.159529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.159562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.159735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.159768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.159872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.159905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.160080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.160113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.160230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.160263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:32.721 [2024-12-13 06:42:24.160510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.160544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.721 [2024-12-13 06:42:24.160664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.160696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.160849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.160882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.721 [2024-12-13 06:42:24.161001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.161217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.161372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.161541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.161751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.161897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.161929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 
00:36:32.721 [2024-12-13 06:42:24.162058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.162089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.162271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.162304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.162488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.721 [2024-12-13 06:42:24.162522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.721 qpair failed and we were unable to recover it. 00:36:32.721 [2024-12-13 06:42:24.162640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.162672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.162782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.162815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.162938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.162969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.163079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.163243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.163461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.163601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.163731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.163932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.163964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.164093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.164125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.164229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.164260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.164461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.164494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.164692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.164725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.164914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.164946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.165120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.165152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.165335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.165368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.165484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.165517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.165706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.165737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.165931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.165964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.166070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.166102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.166361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.166392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.166510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.166544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.166740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.166771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.166788] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:32.722 [2024-12-13 06:42:24.166975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.167129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.167265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.167411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.167573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.167758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.167892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.167924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.168104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.168135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 00:36:32.722 [2024-12-13 06:42:24.168381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.722 [2024-12-13 06:42:24.168414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.722 qpair failed and we were unable to recover it. 
00:36:32.722 [2024-12-13 06:42:24.168561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.723 [2024-12-13 06:42:24.168595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.723 qpair failed and we were unable to recover it. 00:36:32.723 [2024-12-13 06:42:24.168746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.723 [2024-12-13 06:42:24.168778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.723 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.169021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.169161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.169316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 
00:36:32.726 [2024-12-13 06:42:24.169495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.169663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.169867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.169899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.170146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.170178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.726 [2024-12-13 06:42:24.170290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.170321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 
00:36:32.726 [2024-12-13 06:42:24.170593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.726 [2024-12-13 06:42:24.170626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.726 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.170762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.170794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.170913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.170944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.171079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.171111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.171350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.171382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 
00:36:32.727 [2024-12-13 06:42:24.171600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.171634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.171755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.171787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.171895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.172064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.172095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.727 [2024-12-13 06:42:24.172388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.172421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 
00:36:32.727 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:32.727 [2024-12-13 06:42:24.172601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.172634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.172765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.172797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.727 [2024-12-13 06:42:24.172913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.172945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.173125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.173158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 
00:36:32.727 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.727 [2024-12-13 06:42:24.173400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.173432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.727 qpair failed and we were unable to recover it. 00:36:32.727 [2024-12-13 06:42:24.173580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.727 [2024-12-13 06:42:24.173612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.173717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.173749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.173931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.173963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.174112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.174144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.174250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.174281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.174467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.174525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7cc000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.174699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.174770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7c8000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.175066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.175110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.175299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.175333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.175443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.175492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.175605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.175639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.175750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.175783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.175971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.176003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.176199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.176231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.176356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.176388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.176540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.176574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.176747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.176779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.176968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.177001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.177141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.177172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.177359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.177392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.177622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.177656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.177826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.177858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.178309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.178903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.178935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.179124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.179156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.179284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.179316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.179419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.179458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12c8cd0 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.179601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.179636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.179833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.179865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.728 [2024-12-13 06:42:24.180059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.180091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.728 [2024-12-13 06:42:24.180327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.180360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 [2024-12-13 06:42:24.180546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.180579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 00:36:32.728 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:32.728 [2024-12-13 06:42:24.180707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.728 [2024-12-13 06:42:24.180739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.728 qpair failed and we were unable to recover it. 
00:36:32.729 [2024-12-13 06:42:24.180841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:32.729 [2024-12-13 06:42:24.180873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7d4000b90 with addr=10.0.0.2, port=4420 00:36:32.729 qpair failed and we were unable to recover it. 00:36:32.729 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.729 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.729 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.729 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:32.730 [2024-12-13 06:42:24.191829] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:32.730 [2024-12-13 06:42:24.197509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.197650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.197695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.197727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.197750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.197814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:32.730 06:42:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1208764 00:36:32.730 [2024-12-13 06:42:24.207396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.207495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.207525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.207541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.207556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.207591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 [2024-12-13 06:42:24.217358] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.217424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.217443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.217459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.217469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.217491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 [2024-12-13 06:42:24.227320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.227388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.227406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.227413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.227420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.227437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 [2024-12-13 06:42:24.237367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.237439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.237457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.237464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.237471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.237486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 [2024-12-13 06:42:24.247383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.730 [2024-12-13 06:42:24.247470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.730 [2024-12-13 06:42:24.247483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.730 [2024-12-13 06:42:24.247489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.730 [2024-12-13 06:42:24.247495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.730 [2024-12-13 06:42:24.247510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.730 qpair failed and we were unable to recover it. 
00:36:32.730 [2024-12-13 06:42:24.257390] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.257437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.257454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.257461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.257467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.257481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.267423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.267534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.267547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.267553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.267562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.267577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.277472] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.277525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.277538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.277544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.277550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.277564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.287484] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.287533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.287546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.287552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.287558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.287572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.297519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.297572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.297585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.297591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.297597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.297611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.307530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.307588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.307600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.307607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.307613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.307628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.731 [2024-12-13 06:42:24.317549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.731 [2024-12-13 06:42:24.317605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.731 [2024-12-13 06:42:24.317618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.731 [2024-12-13 06:42:24.317624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.731 [2024-12-13 06:42:24.317630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.731 [2024-12-13 06:42:24.317645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.731 qpair failed and we were unable to recover it. 
00:36:32.995 [2024-12-13 06:42:24.327608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:32.995 [2024-12-13 06:42:24.327667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:32.995 [2024-12-13 06:42:24.327680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:32.995 [2024-12-13 06:42:24.327686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:32.995 [2024-12-13 06:42:24.327691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:32.995 [2024-12-13 06:42:24.327706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:32.995 qpair failed and we were unable to recover it. 
00:36:32.995 [2024-12-13 06:42:24.337608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.995 [2024-12-13 06:42:24.337661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.995 [2024-12-13 06:42:24.337673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.995 [2024-12-13 06:42:24.337679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.995 [2024-12-13 06:42:24.337684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.995 [2024-12-13 06:42:24.337698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.995 qpair failed and we were unable to recover it.
00:36:32.995 [2024-12-13 06:42:24.347650] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.995 [2024-12-13 06:42:24.347720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.347732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.347739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.347745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.347758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.357664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.357714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.357729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.357735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.357741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.357756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.367694] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.367759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.367771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.367777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.367783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.367797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.377738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.377793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.377805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.377812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.377818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.377832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.387808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.387866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.387879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.387886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.387892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.387906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.397775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.397831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.397843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.397850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.397859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.397873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.407798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.407846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.407859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.407866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.407871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.407886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.417816] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.417875] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.417888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.417894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.417900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.417914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.427901] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.427958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.427970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.427977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.427982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.427996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.437878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.437938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.437951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.437957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.437963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.437977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.447820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.447879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.447891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.447898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.447904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.447918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.457924] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.457977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.457990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.457997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.458002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.458016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.467965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.468021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.468033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.468040] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.468046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.468060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.477992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.478048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.478060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.478067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.478072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.478086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.488064] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.488165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.488181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.488188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.488193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.488207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.498069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.498119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.498132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.996 [2024-12-13 06:42:24.498138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.996 [2024-12-13 06:42:24.498144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.996 [2024-12-13 06:42:24.498159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.996 qpair failed and we were unable to recover it.
00:36:32.996 [2024-12-13 06:42:24.508075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.996 [2024-12-13 06:42:24.508135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.996 [2024-12-13 06:42:24.508147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.508153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.508159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.508173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.518085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.518138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.518151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.518157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.518163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.518177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.528107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.528160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.528173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.528182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.528188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.528201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.538164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.538231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.538244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.538250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.538256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.538269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.548202] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.548258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.548270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.548276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.548282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.548295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.558214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.558269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.558282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.558288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.558294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.558307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.568237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.568286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.568299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.568305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.568310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.568327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.578276] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.578326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.578339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.578345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.578351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.578365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.588325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.588386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.588398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.588405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.588410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.588425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.598335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.598403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.598416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.598422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.598427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.598441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.608352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.608401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.608413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.608419] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.608425] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.608439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.618380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.618434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.618446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.618457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.618462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.618477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.628420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.628508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.628520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.628527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.628532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.628547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.638452] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:32.997 [2024-12-13 06:42:24.638510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:32.997 [2024-12-13 06:42:24.638524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:32.997 [2024-12-13 06:42:24.638531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:32.997 [2024-12-13 06:42:24.638537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:32.997 [2024-12-13 06:42:24.638551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:32.997 qpair failed and we were unable to recover it.
00:36:32.997 [2024-12-13 06:42:24.648459] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.648516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.648529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.648535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.648541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.648556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.257 [2024-12-13 06:42:24.658492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.658549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.658561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.658571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.658576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.658591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.257 [2024-12-13 06:42:24.668533] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.668588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.668601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.668607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.668612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.668627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.257 [2024-12-13 06:42:24.678521] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.678615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.678628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.678634] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.678640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.678654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.257 [2024-12-13 06:42:24.688634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.688691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.688703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.688709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.688715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.688729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.257 [2024-12-13 06:42:24.698675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.257 [2024-12-13 06:42:24.698762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.257 [2024-12-13 06:42:24.698774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.257 [2024-12-13 06:42:24.698780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.257 [2024-12-13 06:42:24.698786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.257 [2024-12-13 06:42:24.698803] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.257 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.708710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.708967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.708982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.708989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.708995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.709010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.718707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.718770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.718782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.718788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.718794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.718808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.728710] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.728757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.728770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.728776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.728781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.728795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.738765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.738821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.738834] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.738841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.738846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.738860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.748773] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.748828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.748840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.748847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.748853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.748867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.758803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.758866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.758878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.758884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.758889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.758903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.768832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.768884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.768897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.768903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.768909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.768923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.778896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.778949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.778961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.778968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.778974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.778988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.788879] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.788931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.788947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.788953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.788960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.788974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.798914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.798966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.798979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.798985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.798992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.799006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.808867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.808917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.808930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.808936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.808942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.808957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.818969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.819017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.819029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.819035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.819041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.819054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.829019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.258 [2024-12-13 06:42:24.829074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.258 [2024-12-13 06:42:24.829086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.258 [2024-12-13 06:42:24.829092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.258 [2024-12-13 06:42:24.829101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.258 [2024-12-13 06:42:24.829115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.258 qpair failed and we were unable to recover it. 
00:36:33.258 [2024-12-13 06:42:24.839040] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.839107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.839119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.839125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.839131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.839145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.849052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.849105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.849118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.849124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.849130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.849143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.859122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.859173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.859186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.859192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.859197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.859211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.869121] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.869184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.869196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.869202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.869208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.869222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.879149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.879203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.879215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.879222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.879227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.879241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.889169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.889245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.889258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.889264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.889270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.889284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.899194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.899244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.899257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.899263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.899269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.899284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.259 [2024-12-13 06:42:24.909233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.259 [2024-12-13 06:42:24.909291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.259 [2024-12-13 06:42:24.909303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.259 [2024-12-13 06:42:24.909309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.259 [2024-12-13 06:42:24.909315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.259 [2024-12-13 06:42:24.909329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.259 qpair failed and we were unable to recover it. 
00:36:33.519 [2024-12-13 06:42:24.919250] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.519 [2024-12-13 06:42:24.919336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.519 [2024-12-13 06:42:24.919352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.519 [2024-12-13 06:42:24.919359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.519 [2024-12-13 06:42:24.919364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.519 [2024-12-13 06:42:24.919378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.519 qpair failed and we were unable to recover it. 
00:36:33.519 [2024-12-13 06:42:24.929313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.519 [2024-12-13 06:42:24.929365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.519 [2024-12-13 06:42:24.929377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.519 [2024-12-13 06:42:24.929384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.519 [2024-12-13 06:42:24.929389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.519 [2024-12-13 06:42:24.929404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.519 qpair failed and we were unable to recover it. 
00:36:33.519 [2024-12-13 06:42:24.939293] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.519 [2024-12-13 06:42:24.939345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.519 [2024-12-13 06:42:24.939358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.519 [2024-12-13 06:42:24.939364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.519 [2024-12-13 06:42:24.939370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.519 [2024-12-13 06:42:24.939384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.519 qpair failed and we were unable to recover it.
00:36:33.519 [2024-12-13 06:42:24.949386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.519 [2024-12-13 06:42:24.949492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.519 [2024-12-13 06:42:24.949505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.519 [2024-12-13 06:42:24.949511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.519 [2024-12-13 06:42:24.949517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.519 [2024-12-13 06:42:24.949531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.519 qpair failed and we were unable to recover it.
00:36:33.519 [2024-12-13 06:42:24.959361] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.519 [2024-12-13 06:42:24.959416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.519 [2024-12-13 06:42:24.959429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.519 [2024-12-13 06:42:24.959435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.519 [2024-12-13 06:42:24.959443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.519 [2024-12-13 06:42:24.959462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.519 qpair failed and we were unable to recover it.
00:36:33.519 [2024-12-13 06:42:24.969428] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.519 [2024-12-13 06:42:24.969487] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.519 [2024-12-13 06:42:24.969500] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:24.969506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:24.969512] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:24.969525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:24.979402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:24.979461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:24.979474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:24.979480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:24.979486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:24.979500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:24.989495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:24.989550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:24.989563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:24.989569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:24.989575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:24.989589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:24.999529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:24.999589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:24.999601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:24.999608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:24.999613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:24.999628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.009606] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.009677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.009689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.009695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.009701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.009715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.019570] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.019623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.019635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.019641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.019647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.019661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.029603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.029659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.029671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.029677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.029683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.029696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.039648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.039726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.039739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.039745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.039751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.039764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.049584] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.049689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.049701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.049707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.049713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.049726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.059632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.059686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.059698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.059704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.059710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.059723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.069647] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.069707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.069719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.069725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.069731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.069745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.079790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.079850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.079862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.079869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.079874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.079888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.089785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.089843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.089856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.089866] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.089871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.520 [2024-12-13 06:42:25.089886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.520 qpair failed and we were unable to recover it.
00:36:33.520 [2024-12-13 06:42:25.099800] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.520 [2024-12-13 06:42:25.099854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.520 [2024-12-13 06:42:25.099867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.520 [2024-12-13 06:42:25.099873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.520 [2024-12-13 06:42:25.099879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.099893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.109806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.109861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.109873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.109879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.109885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.109899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.119795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.119850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.119863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.119869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.119875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.119889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.129911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.129967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.129980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.129986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.129992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.130009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.139892] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.139943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.139956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.139962] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.139968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.139983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.149940] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.149999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.150011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.150017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.150023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.150037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.159991] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.160060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.160072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.160078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.160084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.160098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.521 [2024-12-13 06:42:25.169982] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.521 [2024-12-13 06:42:25.170033] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.521 [2024-12-13 06:42:25.170046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.521 [2024-12-13 06:42:25.170052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.521 [2024-12-13 06:42:25.170057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.521 [2024-12-13 06:42:25.170071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.521 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.180012] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.180064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.180077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.180083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.180088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.180104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.190056] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.190113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.190125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.190131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.190137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.190151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.200101] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.200159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.200172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.200178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.200184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.200198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.210087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.210140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.210152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.210158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.210164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.210177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.220126] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.220178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.220191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.220200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.220205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.220219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.230087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.230138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.230151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.230157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.230162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.230176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.240242] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.240304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.240316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.240323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.240328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.240342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.250201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.250284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.250297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.250303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.250308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.250322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.260233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.260286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.260298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.260305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.260310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.260327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.270193] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.270250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.270262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.270269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.270275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.270289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.280286] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:33.780 [2024-12-13 06:42:25.280338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:33.780 [2024-12-13 06:42:25.280352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:33.780 [2024-12-13 06:42:25.280358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:33.780 [2024-12-13 06:42:25.280363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:33.780 [2024-12-13 06:42:25.280377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:33.780 qpair failed and we were unable to recover it.
00:36:33.780 [2024-12-13 06:42:25.290320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.780 [2024-12-13 06:42:25.290371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.780 [2024-12-13 06:42:25.290383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.290389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.290395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.290409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.300316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.300370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.300382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.300388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.300394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.300408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.310306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.310383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.310397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.310403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.310409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.310423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.320319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.320383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.320395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.320401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.320407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.320421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.330432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.330492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.330505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.330511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.330517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.330531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.340478] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.340537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.340549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.340555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.340561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.340574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.350506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.350559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.350575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.350581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.350587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.350601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.360451] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.360508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.360519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.360526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.360531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.360545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.370578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.370632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.370645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.370651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.370657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.370671] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.380565] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.380618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.380630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.380636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.380642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.380656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.390611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.390691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.390705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.390711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.390720] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.390735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.400579] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.400637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.400650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.400657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.400663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.400678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.410655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.410713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.410725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.410732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.410737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.781 [2024-12-13 06:42:25.410751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.781 qpair failed and we were unable to recover it. 
00:36:33.781 [2024-12-13 06:42:25.420699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.781 [2024-12-13 06:42:25.420800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.781 [2024-12-13 06:42:25.420812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.781 [2024-12-13 06:42:25.420818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.781 [2024-12-13 06:42:25.420824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.782 [2024-12-13 06:42:25.420838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.782 qpair failed and we were unable to recover it. 
00:36:33.782 [2024-12-13 06:42:25.430721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:33.782 [2024-12-13 06:42:25.430778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:33.782 [2024-12-13 06:42:25.430791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:33.782 [2024-12-13 06:42:25.430797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:33.782 [2024-12-13 06:42:25.430803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:33.782 [2024-12-13 06:42:25.430816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:33.782 qpair failed and we were unable to recover it. 
00:36:34.040 [2024-12-13 06:42:25.440763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.040 [2024-12-13 06:42:25.440821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.040 [2024-12-13 06:42:25.440833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.040 [2024-12-13 06:42:25.440839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.040 [2024-12-13 06:42:25.440845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.040 [2024-12-13 06:42:25.440860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.040 qpair failed and we were unable to recover it. 
00:36:34.040 [2024-12-13 06:42:25.450767] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.040 [2024-12-13 06:42:25.450818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.040 [2024-12-13 06:42:25.450830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.040 [2024-12-13 06:42:25.450836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.040 [2024-12-13 06:42:25.450841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.040 [2024-12-13 06:42:25.450856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.040 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.460803] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.460852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.460864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.460870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.460875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.460889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.470863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.470926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.470939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.470945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.470951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.470965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.480789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.480892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.480908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.480915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.480920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.480935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.490881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.490930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.490942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.490948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.490954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.490969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.500824] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.500879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.500891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.500897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.500903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.500917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.510915] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.510971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.510984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.510990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.510995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.511009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.520898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.520972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.520984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.520990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.520999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.521013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.530931] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.530986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.530998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.531005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.531011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.531025] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.540948] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.541009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.541021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.541028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.541033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.541047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.551070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.551130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.551143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.551149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.551155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.551169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.561014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.561069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.561082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.561088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.561094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.561108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.571024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.041 [2024-12-13 06:42:25.571083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.041 [2024-12-13 06:42:25.571096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.041 [2024-12-13 06:42:25.571102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.041 [2024-12-13 06:42:25.571108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.041 [2024-12-13 06:42:25.571122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.041 qpair failed and we were unable to recover it. 
00:36:34.041 [2024-12-13 06:42:25.581052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.041 [2024-12-13 06:42:25.581110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.041 [2024-12-13 06:42:25.581123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.041 [2024-12-13 06:42:25.581129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.041 [2024-12-13 06:42:25.581135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.041 [2024-12-13 06:42:25.581149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.041 qpair failed and we were unable to recover it.
00:36:34.041 [2024-12-13 06:42:25.591149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.041 [2024-12-13 06:42:25.591205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.591218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.591224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.591230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.591244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.601171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.601226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.601238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.601244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.601250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.601265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.611192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.611250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.611263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.611269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.611275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.611289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.621258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.621311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.621323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.621329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.621335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.621349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.631233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.631290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.631303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.631309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.631315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.631329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.641281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.641338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.641351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.641357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.641363] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.641378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.651261] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.651360] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.651373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.651382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.651388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.651401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.661380] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.661441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.661460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.661466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.661472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.661486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.671352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.671442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.671460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.671467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.671472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.671486] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.681423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.681493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.681505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.681512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.681517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.681532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.042 [2024-12-13 06:42:25.691371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.042 [2024-12-13 06:42:25.691426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.042 [2024-12-13 06:42:25.691439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.042 [2024-12-13 06:42:25.691445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.042 [2024-12-13 06:42:25.691457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.042 [2024-12-13 06:42:25.691475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.042 qpair failed and we were unable to recover it.
00:36:34.302 [2024-12-13 06:42:25.701512] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.302 [2024-12-13 06:42:25.701558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.302 [2024-12-13 06:42:25.701571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.302 [2024-12-13 06:42:25.701577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.302 [2024-12-13 06:42:25.701582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.302 [2024-12-13 06:42:25.701596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.302 qpair failed and we were unable to recover it.
00:36:34.302 [2024-12-13 06:42:25.711506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.302 [2024-12-13 06:42:25.711560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.302 [2024-12-13 06:42:25.711572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.302 [2024-12-13 06:42:25.711579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.302 [2024-12-13 06:42:25.711585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.302 [2024-12-13 06:42:25.711598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.302 qpair failed and we were unable to recover it.
00:36:34.302 [2024-12-13 06:42:25.721581] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.302 [2024-12-13 06:42:25.721643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.302 [2024-12-13 06:42:25.721655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.302 [2024-12-13 06:42:25.721661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.302 [2024-12-13 06:42:25.721667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.302 [2024-12-13 06:42:25.721681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.302 qpair failed and we were unable to recover it.
00:36:34.302 [2024-12-13 06:42:25.731613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.302 [2024-12-13 06:42:25.731662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.302 [2024-12-13 06:42:25.731674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.302 [2024-12-13 06:42:25.731680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.302 [2024-12-13 06:42:25.731686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.302 [2024-12-13 06:42:25.731700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.302 qpair failed and we were unable to recover it.
00:36:34.302 [2024-12-13 06:42:25.741583] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.302 [2024-12-13 06:42:25.741638] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.302 [2024-12-13 06:42:25.741650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.302 [2024-12-13 06:42:25.741657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.302 [2024-12-13 06:42:25.741663] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.741677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.751567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.751625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.751637] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.751643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.751650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.751663] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.761652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.761705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.761717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.761724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.761729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.761743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.771664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.771722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.771734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.771740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.771746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.771760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.781697] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.781748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.781764] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.781771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.781777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.781790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.791746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.791800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.791812] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.791818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.791823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.791837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.801768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.801821] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.801833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.801839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.801845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.801858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.811801] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.811850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.811862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.811868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.811874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.811887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.821853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.821912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.821924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.821930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.821936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.821955] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.831792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.831847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.831859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.831865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.831871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.831885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.841797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.841860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.841873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.841879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.841885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.841899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.851914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.851968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.851980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.851987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.851992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.852007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.861922] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.861973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.861986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.861993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.303 [2024-12-13 06:42:25.861999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.303 [2024-12-13 06:42:25.862013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.303 qpair failed and we were unable to recover it.
00:36:34.303 [2024-12-13 06:42:25.871959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.303 [2024-12-13 06:42:25.872046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.303 [2024-12-13 06:42:25.872059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.303 [2024-12-13 06:42:25.872065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.304 [2024-12-13 06:42:25.872071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.304 [2024-12-13 06:42:25.872085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.304 qpair failed and we were unable to recover it.
00:36:34.304 [2024-12-13 06:42:25.881984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.304 [2024-12-13 06:42:25.882037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.304 [2024-12-13 06:42:25.882050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.304 [2024-12-13 06:42:25.882056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.304 [2024-12-13 06:42:25.882062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.304 [2024-12-13 06:42:25.882076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.304 qpair failed and we were unable to recover it.
00:36:34.304 [2024-12-13 06:42:25.891936] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.891995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.892009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.892015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.892021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.892036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.902070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.902131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.902144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.902150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.902156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.902170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.912122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.912179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.912195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.912202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.912207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.912221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.922107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.922165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.922178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.922184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.922190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.922203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.932135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.932188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.932201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.932207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.932213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.932227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.942153] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.942207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.942220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.942227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.942233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.942247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.304 [2024-12-13 06:42:25.952204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.304 [2024-12-13 06:42:25.952256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.304 [2024-12-13 06:42:25.952268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.304 [2024-12-13 06:42:25.952275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.304 [2024-12-13 06:42:25.952283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.304 [2024-12-13 06:42:25.952297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.304 qpair failed and we were unable to recover it. 
00:36:34.564 [2024-12-13 06:42:25.962244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.564 [2024-12-13 06:42:25.962294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.564 [2024-12-13 06:42:25.962307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.564 [2024-12-13 06:42:25.962313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.564 [2024-12-13 06:42:25.962318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.564 [2024-12-13 06:42:25.962333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.564 qpair failed and we were unable to recover it. 
00:36:34.564 [2024-12-13 06:42:25.972253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.564 [2024-12-13 06:42:25.972304] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.564 [2024-12-13 06:42:25.972316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.564 [2024-12-13 06:42:25.972323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.564 [2024-12-13 06:42:25.972328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.564 [2024-12-13 06:42:25.972343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.564 qpair failed and we were unable to recover it. 
00:36:34.564 [2024-12-13 06:42:25.982272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.564 [2024-12-13 06:42:25.982322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.564 [2024-12-13 06:42:25.982334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.564 [2024-12-13 06:42:25.982340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.564 [2024-12-13 06:42:25.982346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.564 [2024-12-13 06:42:25.982361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.564 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:25.992291] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:25.992346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:25.992359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:25.992365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:25.992371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:25.992386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.002348] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.002406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.002419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.002425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.002431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.002445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.012367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.012416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.012429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.012435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.012441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.012460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.022398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.022459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.022472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.022478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.022483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.022498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.032433] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.032494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.032506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.032513] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.032518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.032533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.042457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.042510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.042525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.042532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.042537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.042551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.052475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.052530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.052543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.052549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.052554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.052568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.062503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.062575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.062587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.062593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.062598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.062612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.072501] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.072555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.072567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.072573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.072579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.072593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.082541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.082601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.082613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.082622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.082628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.082643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.092599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.092670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.092683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.092689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.092695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.092710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.102634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.102688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.102700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.102707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.102713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.102727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.112662] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.112754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.565 [2024-12-13 06:42:26.112767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.565 [2024-12-13 06:42:26.112773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.565 [2024-12-13 06:42:26.112779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.565 [2024-12-13 06:42:26.112793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.565 qpair failed and we were unable to recover it. 
00:36:34.565 [2024-12-13 06:42:26.122675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.565 [2024-12-13 06:42:26.122726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.122739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.122745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.122751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.122765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.132718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.566 [2024-12-13 06:42:26.132774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.132787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.132793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.132799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.132813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.142742] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.566 [2024-12-13 06:42:26.142800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.142813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.142819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.142825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.142839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.152771] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.566 [2024-12-13 06:42:26.152827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.152840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.152846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.152852] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.152865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.162797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.566 [2024-12-13 06:42:26.162852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.162865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.162870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.162876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.162889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.172757] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:34.566 [2024-12-13 06:42:26.172838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:34.566 [2024-12-13 06:42:26.172851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:34.566 [2024-12-13 06:42:26.172857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:34.566 [2024-12-13 06:42:26.172863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:34.566 [2024-12-13 06:42:26.172876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:34.566 qpair failed and we were unable to recover it. 
00:36:34.566 [2024-12-13 06:42:26.182868] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.566 [2024-12-13 06:42:26.182934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.566 [2024-12-13 06:42:26.182948] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.566 [2024-12-13 06:42:26.182954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.566 [2024-12-13 06:42:26.182959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.566 [2024-12-13 06:42:26.182973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.566 qpair failed and we were unable to recover it.
00:36:34.566 [2024-12-13 06:42:26.192907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.566 [2024-12-13 06:42:26.192960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.566 [2024-12-13 06:42:26.192973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.566 [2024-12-13 06:42:26.192980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.566 [2024-12-13 06:42:26.192986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.566 [2024-12-13 06:42:26.193000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.566 qpair failed and we were unable to recover it.
00:36:34.566 [2024-12-13 06:42:26.202943] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.566 [2024-12-13 06:42:26.203003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.566 [2024-12-13 06:42:26.203016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.566 [2024-12-13 06:42:26.203023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.566 [2024-12-13 06:42:26.203028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.566 [2024-12-13 06:42:26.203043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.566 qpair failed and we were unable to recover it.
00:36:34.566 [2024-12-13 06:42:26.212966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.566 [2024-12-13 06:42:26.213047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.566 [2024-12-13 06:42:26.213059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.566 [2024-12-13 06:42:26.213068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.566 [2024-12-13 06:42:26.213074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.566 [2024-12-13 06:42:26.213088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.566 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.222965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.223020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.223032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.223038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.223044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.223058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.233003] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.233060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.233073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.233079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.233085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.233099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.242946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.243002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.243015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.243021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.243026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.243040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.253091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.253148] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.253161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.253168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.253173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.253190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.263075] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.263128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.263140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.263147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.263152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.263166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.273152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.273249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.273262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.273268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.273273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.273287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.283171] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.283270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.283283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.283289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.283295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.283308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.293165] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.293216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.293228] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.293235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.293240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.293254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.303185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.303238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.303251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.303257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.303263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.303276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.313223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.313279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.313292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.313298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.313303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.313317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.826 qpair failed and we were unable to recover it.
00:36:34.826 [2024-12-13 06:42:26.323262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.826 [2024-12-13 06:42:26.323315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.826 [2024-12-13 06:42:26.323328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.826 [2024-12-13 06:42:26.323335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.826 [2024-12-13 06:42:26.323341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.826 [2024-12-13 06:42:26.323355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.333278] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.333334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.333347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.333354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.333359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.333373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.343300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.343354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.343370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.343376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.343382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.343396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.353325] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.353382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.353394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.353401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.353406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.353421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.363327] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.363419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.363431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.363437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.363443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.363461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.373378] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.373435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.373452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.373459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.373465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.373478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.383415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.383473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.383485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.383492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.383497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.383514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.393507] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.393594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.393608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.393614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.393619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.393634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.403494] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.403547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.403560] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.403565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.403571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.403586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.413488] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.413544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.413556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.413562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.413568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.413581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.423527] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.423580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.423592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.423598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.423604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.423618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.433562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.433617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.433629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.433635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.433641] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.433655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.443621] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.443682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.443694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.443701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.443707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.443720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.453620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.827 [2024-12-13 06:42:26.453684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.827 [2024-12-13 06:42:26.453696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.827 [2024-12-13 06:42:26.453702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.827 [2024-12-13 06:42:26.453708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.827 [2024-12-13 06:42:26.453722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.827 qpair failed and we were unable to recover it.
00:36:34.827 [2024-12-13 06:42:26.463648] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.828 [2024-12-13 06:42:26.463702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.828 [2024-12-13 06:42:26.463714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.828 [2024-12-13 06:42:26.463720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.828 [2024-12-13 06:42:26.463726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.828 [2024-12-13 06:42:26.463740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.828 qpair failed and we were unable to recover it.
00:36:34.828 [2024-12-13 06:42:26.473682] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:34.828 [2024-12-13 06:42:26.473738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:34.828 [2024-12-13 06:42:26.473753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:34.828 [2024-12-13 06:42:26.473759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:34.828 [2024-12-13 06:42:26.473765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:34.828 [2024-12-13 06:42:26.473779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:34.828 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.483704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.088 [2024-12-13 06:42:26.483754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.088 [2024-12-13 06:42:26.483767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.088 [2024-12-13 06:42:26.483773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.088 [2024-12-13 06:42:26.483778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.088 [2024-12-13 06:42:26.483792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.088 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.493701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.088 [2024-12-13 06:42:26.493792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.088 [2024-12-13 06:42:26.493804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.088 [2024-12-13 06:42:26.493810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.088 [2024-12-13 06:42:26.493816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.088 [2024-12-13 06:42:26.493829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.088 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.503804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.088 [2024-12-13 06:42:26.503862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.088 [2024-12-13 06:42:26.503875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.088 [2024-12-13 06:42:26.503881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.088 [2024-12-13 06:42:26.503886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.088 [2024-12-13 06:42:26.503900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.088 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.513821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.088 [2024-12-13 06:42:26.513919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.088 [2024-12-13 06:42:26.513932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.088 [2024-12-13 06:42:26.513937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.088 [2024-12-13 06:42:26.513946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.088 [2024-12-13 06:42:26.513961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.088 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.523804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.088 [2024-12-13 06:42:26.523860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.088 [2024-12-13 06:42:26.523872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.088 [2024-12-13 06:42:26.523878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.088 [2024-12-13 06:42:26.523884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.088 [2024-12-13 06:42:26.523898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.088 qpair failed and we were unable to recover it.
00:36:35.088 [2024-12-13 06:42:26.533860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.533913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.533925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.533931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.533937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.533951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.543863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.543912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.543925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.543931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.543936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.543950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.553904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.553957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.553969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.553975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.553980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.553994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.563971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.564032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.564045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.564051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.564057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.564071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.573950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.574004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.574017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.574022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.574028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.574042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.583984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.584036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.584048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.584054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.584060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.584074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.594010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.088 [2024-12-13 06:42:26.594063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.088 [2024-12-13 06:42:26.594076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.088 [2024-12-13 06:42:26.594082] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.088 [2024-12-13 06:42:26.594087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.088 [2024-12-13 06:42:26.594102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.088 qpair failed and we were unable to recover it. 
00:36:35.088 [2024-12-13 06:42:26.604051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.604106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.604122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.604127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.604133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.604147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.614111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.614172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.614184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.614191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.614196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.614210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.624091] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.624143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.624156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.624162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.624168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.624181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.634129] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.634184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.634196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.634202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.634208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.634223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.644156] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.644249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.644261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.644271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.644276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.644291] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.654164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.654217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.654230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.654236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.654242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.654256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.664203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.664253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.664265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.664272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.664277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.664292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.674254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.674351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.674365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.674372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.674379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.674394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.684194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.684251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.684264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.684271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.684277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.684292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.694219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.694323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.694336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.694343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.694348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.694362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.704255] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.704309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.704322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.704328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.704334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.704348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.714367] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.714423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.714437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.714443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.714454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.714469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.724379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.724431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.089 [2024-12-13 06:42:26.724443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.089 [2024-12-13 06:42:26.724456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.089 [2024-12-13 06:42:26.724462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.089 [2024-12-13 06:42:26.724476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.089 qpair failed and we were unable to recover it. 
00:36:35.089 [2024-12-13 06:42:26.734389] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.089 [2024-12-13 06:42:26.734445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.090 [2024-12-13 06:42:26.734463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.090 [2024-12-13 06:42:26.734469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.090 [2024-12-13 06:42:26.734475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.090 [2024-12-13 06:42:26.734489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.090 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.744354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.744411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.744424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.744430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.744436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.744456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.754539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.754597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.754609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.754616] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.754622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.754636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.764552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.764611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.764624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.764631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.764638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.764652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.774479] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.774537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.774550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.774559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.774565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.774579] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.784576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.784631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.784644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.784650] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.784656] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.784670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.794612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.794676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.794688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.794695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.794700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.794715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.804654] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.804717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.804730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.804736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.804741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.804755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.814658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.378 [2024-12-13 06:42:26.814718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.378 [2024-12-13 06:42:26.814731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.378 [2024-12-13 06:42:26.814736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.378 [2024-12-13 06:42:26.814742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.378 [2024-12-13 06:42:26.814759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.378 qpair failed and we were unable to recover it. 
00:36:35.378 [2024-12-13 06:42:26.824649] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.378 [2024-12-13 06:42:26.824702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.378 [2024-12-13 06:42:26.824715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.378 [2024-12-13 06:42:26.824721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.378 [2024-12-13 06:42:26.824726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.824740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.834721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.834774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.834786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.834792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.834798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.834812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.844656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.844715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.844727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.844734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.844739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.844753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.854747] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.854802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.854814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.854820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.854825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.854839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.864769] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.864822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.864835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.864841] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.864847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.864861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.874730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.874787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.874800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.874806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.874811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.874825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.884845] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.884909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.884922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.884928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.884933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.884947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.894882] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.894932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.894945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.894952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.894958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.894972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.904863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.904918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.904934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.904941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.904946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.904961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.914867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.914949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.914961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.914968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.914974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.914988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.924884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.924965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.924978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.924985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.924992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.925006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.934911] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.934964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.934977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.934984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.934990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.935004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.945054] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.945106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.945119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.945125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.945134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.945148] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.379 [2024-12-13 06:42:26.955039] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.379 [2024-12-13 06:42:26.955096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.379 [2024-12-13 06:42:26.955108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.379 [2024-12-13 06:42:26.955115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.379 [2024-12-13 06:42:26.955120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.379 [2024-12-13 06:42:26.955134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.379 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:26.965024] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:26.965113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:26.965125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:26.965131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:26.965136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:26.965150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:26.975077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:26.975128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:26.975140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:26.975146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:26.975152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:26.975166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:26.985038] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:26.985094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:26.985107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:26.985113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:26.985119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:26.985133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:26.995181] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:26.995234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:26.995247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:26.995253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:26.995258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:26.995273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:27.005137] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:27.005225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:27.005237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:27.005243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:27.005249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:27.005262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.380 [2024-12-13 06:42:27.015297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.380 [2024-12-13 06:42:27.015354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.380 [2024-12-13 06:42:27.015367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.380 [2024-12-13 06:42:27.015373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.380 [2024-12-13 06:42:27.015379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.380 [2024-12-13 06:42:27.015393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.380 qpair failed and we were unable to recover it.
00:36:35.660 [2024-12-13 06:42:27.025300] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.660 [2024-12-13 06:42:27.025405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.660 [2024-12-13 06:42:27.025417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.660 [2024-12-13 06:42:27.025423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.660 [2024-12-13 06:42:27.025429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.660 [2024-12-13 06:42:27.025444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.660 qpair failed and we were unable to recover it.
00:36:35.660 [2024-12-13 06:42:27.035332] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.660 [2024-12-13 06:42:27.035388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.660 [2024-12-13 06:42:27.035404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.660 [2024-12-13 06:42:27.035410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.660 [2024-12-13 06:42:27.035416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.660 [2024-12-13 06:42:27.035429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.660 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.045357] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.045411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.045423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.045429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.045435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.045454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.055336] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.055397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.055409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.055416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.055421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.055435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.065347] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.065400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.065412] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.065418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.065424] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.065437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.075317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.075371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.075383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.075389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.075398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.075412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.085403] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.085461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.085475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.085481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.085487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.085501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.095343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.095436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.095456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.095463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.095468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.095483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.105462] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.105534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.105547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.105553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.105559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.105575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.115496] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.115572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.115586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.115595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.115603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.115621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.125547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.661 [2024-12-13 06:42:27.125606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.661 [2024-12-13 06:42:27.125618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.661 [2024-12-13 06:42:27.125625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.661 [2024-12-13 06:42:27.125631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.661 [2024-12-13 06:42:27.125645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.661 qpair failed and we were unable to recover it.
00:36:35.661 [2024-12-13 06:42:27.135536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.661 [2024-12-13 06:42:27.135588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.661 [2024-12-13 06:42:27.135601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.661 [2024-12-13 06:42:27.135607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.661 [2024-12-13 06:42:27.135613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.661 [2024-12-13 06:42:27.135627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.661 qpair failed and we were unable to recover it. 
00:36:35.661 [2024-12-13 06:42:27.145566] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.661 [2024-12-13 06:42:27.145617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.661 [2024-12-13 06:42:27.145631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.661 [2024-12-13 06:42:27.145637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.661 [2024-12-13 06:42:27.145643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.661 [2024-12-13 06:42:27.145658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.661 qpair failed and we were unable to recover it. 
00:36:35.661 [2024-12-13 06:42:27.155599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.661 [2024-12-13 06:42:27.155689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.661 [2024-12-13 06:42:27.155701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.661 [2024-12-13 06:42:27.155707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.661 [2024-12-13 06:42:27.155713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.661 [2024-12-13 06:42:27.155727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.661 qpair failed and we were unable to recover it. 
00:36:35.661 [2024-12-13 06:42:27.165626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.661 [2024-12-13 06:42:27.165682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.661 [2024-12-13 06:42:27.165698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.661 [2024-12-13 06:42:27.165704] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.661 [2024-12-13 06:42:27.165709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.165723] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.175658] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.175707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.175720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.175726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.175732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.175746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.185686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.185737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.185749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.185755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.185761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.185775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.195717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.195775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.195788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.195795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.195800] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.195814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.205784] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.205842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.205855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.205864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.205870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.205885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.215759] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.215810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.215823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.215829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.215835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.215850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.225791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.225842] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.225855] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.225861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.225867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.225881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.235867] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.235947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.235960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.235966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.235972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.235986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.245853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.245939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.245951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.245958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.245963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.245977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.255917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.255986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.255999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.256004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.256010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.256024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.265912] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.265966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.265978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.265984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.265990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.266004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.275949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.276008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.276021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.276027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.276033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.276046] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.285978] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.286032] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.286044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.286050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.286056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.286070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.296023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.662 [2024-12-13 06:42:27.296085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.662 [2024-12-13 06:42:27.296097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.662 [2024-12-13 06:42:27.296104] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.662 [2024-12-13 06:42:27.296109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.662 [2024-12-13 06:42:27.296124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.662 qpair failed and we were unable to recover it. 
00:36:35.662 [2024-12-13 06:42:27.306034] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.663 [2024-12-13 06:42:27.306088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.663 [2024-12-13 06:42:27.306100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.663 [2024-12-13 06:42:27.306106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.663 [2024-12-13 06:42:27.306112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.663 [2024-12-13 06:42:27.306126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.663 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.316070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.316128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.316140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.316147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.316153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.316166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.326104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.326162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.326174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.326180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.326186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.326200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.336123] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.336171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.336184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.336193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.336199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.336213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.346147] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.346196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.346209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.346215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.346221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.346236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.356194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.356251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.356264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.356270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.356276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.356290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.366209] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.366262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.366274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.366281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.366286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.366301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.376228] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.376283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.376296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.376302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.376308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.376325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.386258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.386335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.386348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.386354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.386360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.386374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.396223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.396277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.396291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.396298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.396303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.396318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.406320] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.406377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.406390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.406396] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.406402] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.406416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.416338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:35.923 [2024-12-13 06:42:27.416392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:35.923 [2024-12-13 06:42:27.416405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:35.923 [2024-12-13 06:42:27.416411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:35.923 [2024-12-13 06:42:27.416417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:35.923 [2024-12-13 06:42:27.416430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:35.923 qpair failed and we were unable to recover it. 
00:36:35.923 [2024-12-13 06:42:27.426372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.426428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.426441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.426447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.426457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.426471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.436425] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.436485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.436498] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.436504] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.436509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.436523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.446439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.446501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.446514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.446521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.446526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.446541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.456480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.456531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.456544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.456549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.456555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.456570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.466554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.466640] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.466655] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.466661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.466667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.466681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.476554] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.476611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.476623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.476629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.476634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.476649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.486562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.486617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.486630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.486637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.486642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.486656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.496594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.496648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.496660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.496666] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.496672] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.496686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.506604] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.506661] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.506673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.506680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.506689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.506703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.516646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.516703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.516716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.516722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.516727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.516742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.526717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.526771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.526783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.526789] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.526795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.526809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.536756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.536808] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.536821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.536827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.536833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.536847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.546764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.546828] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.924 [2024-12-13 06:42:27.546840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.924 [2024-12-13 06:42:27.546847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.924 [2024-12-13 06:42:27.546853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.924 [2024-12-13 06:42:27.546867] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.924 qpair failed and we were unable to recover it.
00:36:35.924 [2024-12-13 06:42:27.556775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.924 [2024-12-13 06:42:27.556830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.925 [2024-12-13 06:42:27.556842] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.925 [2024-12-13 06:42:27.556848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.925 [2024-12-13 06:42:27.556854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.925 [2024-12-13 06:42:27.556868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.925 qpair failed and we were unable to recover it.
00:36:35.925 [2024-12-13 06:42:27.566787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:35.925 [2024-12-13 06:42:27.566882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:35.925 [2024-12-13 06:42:27.566894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:35.925 [2024-12-13 06:42:27.566901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:35.925 [2024-12-13 06:42:27.566907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:35.925 [2024-12-13 06:42:27.566921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:35.925 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.576823] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.576877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.576890] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.576896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.576902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.576915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.586846] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.586894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.586906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.586912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.586918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.586932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.596898] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.596955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.596970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.596977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.596982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.596996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.606908] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.606969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.606981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.606988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.606993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.607008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.616935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.616989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.617001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.617008] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.617013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.617027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.626967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.627017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.627029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.627035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.627040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.627055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.636992] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.637049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.637061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.637068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.637077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.637091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.647019] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.647076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.647089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.647095] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.647101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.647115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.657077] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.657133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.657146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.657152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.657158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.657172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.667071] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.667124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.667136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.667143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.667149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.185 [2024-12-13 06:42:27.667162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.185 qpair failed and we were unable to recover it.
00:36:36.185 [2024-12-13 06:42:27.677107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.185 [2024-12-13 06:42:27.677164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.185 [2024-12-13 06:42:27.677176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.185 [2024-12-13 06:42:27.677183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.185 [2024-12-13 06:42:27.677188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.677202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.687159] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.687218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.687230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.687236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.687242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.687256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.697143] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.697195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.697207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.697213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.697219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.697234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.707182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.707233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.707245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.707251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.707258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.707271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.717229] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.717299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.717312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.717318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.717323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.717338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.727233] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.727293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.727308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.727315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.727320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.727334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.737271] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.737324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.737336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.737343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.737348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.737363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.747298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.747361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.747373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.747380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.747385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.747399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.757337] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.757395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.757408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.757414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.757420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.757433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.767356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.186 [2024-12-13 06:42:27.767434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.186 [2024-12-13 06:42:27.767446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.186 [2024-12-13 06:42:27.767459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.186 [2024-12-13 06:42:27.767464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.186 [2024-12-13 06:42:27.767478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.186 qpair failed and we were unable to recover it.
00:36:36.186 [2024-12-13 06:42:27.777424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.186 [2024-12-13 06:42:27.777513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.186 [2024-12-13 06:42:27.777526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.186 [2024-12-13 06:42:27.777532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.186 [2024-12-13 06:42:27.777537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.186 [2024-12-13 06:42:27.777552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.186 qpair failed and we were unable to recover it. 
00:36:36.186 [2024-12-13 06:42:27.787440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.186 [2024-12-13 06:42:27.787493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.186 [2024-12-13 06:42:27.787506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.186 [2024-12-13 06:42:27.787512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.186 [2024-12-13 06:42:27.787518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.186 [2024-12-13 06:42:27.787532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.186 qpair failed and we were unable to recover it. 
00:36:36.186 [2024-12-13 06:42:27.797379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.186 [2024-12-13 06:42:27.797436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.186 [2024-12-13 06:42:27.797454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.186 [2024-12-13 06:42:27.797460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.186 [2024-12-13 06:42:27.797466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.186 [2024-12-13 06:42:27.797480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.186 qpair failed and we were unable to recover it. 
00:36:36.186 [2024-12-13 06:42:27.807398] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.186 [2024-12-13 06:42:27.807457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.186 [2024-12-13 06:42:27.807469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.186 [2024-12-13 06:42:27.807476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.186 [2024-12-13 06:42:27.807481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.186 [2024-12-13 06:42:27.807495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.186 qpair failed and we were unable to recover it. 
00:36:36.187 [2024-12-13 06:42:27.817457] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.187 [2024-12-13 06:42:27.817515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.187 [2024-12-13 06:42:27.817527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.187 [2024-12-13 06:42:27.817534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.187 [2024-12-13 06:42:27.817540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.187 [2024-12-13 06:42:27.817554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.187 qpair failed and we were unable to recover it. 
00:36:36.187 [2024-12-13 06:42:27.827523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.187 [2024-12-13 06:42:27.827588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.187 [2024-12-13 06:42:27.827600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.187 [2024-12-13 06:42:27.827606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.187 [2024-12-13 06:42:27.827612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.187 [2024-12-13 06:42:27.827627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.187 qpair failed and we were unable to recover it. 
00:36:36.187 [2024-12-13 06:42:27.837568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.187 [2024-12-13 06:42:27.837623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.187 [2024-12-13 06:42:27.837635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.187 [2024-12-13 06:42:27.837641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.187 [2024-12-13 06:42:27.837646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.187 [2024-12-13 06:42:27.837660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.187 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.847643] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.847699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.847711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.847717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.847722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.847737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.857614] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.857669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.857681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.857687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.857693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.857707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.867640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.867704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.867716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.867722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.867728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.867742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.877699] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.877756] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.877769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.877775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.877781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.877795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.887636] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.887691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.887703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.887709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.887715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.887729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.897736] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.897789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.897802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.897812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.897818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.897833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.907762] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.907815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.907827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.907834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.907839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.907853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.917798] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.917850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.917862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.917868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.917873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.917887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.927815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.927873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.927885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.927892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.927897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.927911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.937832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.937883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.937896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.937902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.937908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.937924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.947920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.947976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.947989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.947995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.948001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.447 [2024-12-13 06:42:27.948015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.447 qpair failed and we were unable to recover it. 
00:36:36.447 [2024-12-13 06:42:27.957903] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.447 [2024-12-13 06:42:27.957958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.447 [2024-12-13 06:42:27.957970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.447 [2024-12-13 06:42:27.957976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.447 [2024-12-13 06:42:27.957982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:27.957996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:27.967928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:27.967999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:27.968011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:27.968017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:27.968023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:27.968036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:27.977963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:27.978011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:27.978024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:27.978030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:27.978035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:27.978049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:27.987999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:27.988047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:27.988060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:27.988066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:27.988072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:27.988086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:27.998053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:27.998114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:27.998127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:27.998134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:27.998140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:27.998154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:28.008078] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:28.008129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:28.008142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:28.008148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:28.008154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:28.008168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:28.017988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:28.018047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:28.018059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:28.018065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:28.018072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:28.018086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:28.028110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:28.028207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:28.028222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:28.028228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:28.028233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:28.028247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:28.038053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.448 [2024-12-13 06:42:28.038111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.448 [2024-12-13 06:42:28.038123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.448 [2024-12-13 06:42:28.038129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.448 [2024-12-13 06:42:28.038135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.448 [2024-12-13 06:42:28.038149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.448 qpair failed and we were unable to recover it. 
00:36:36.448 [2024-12-13 06:42:28.048169] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.448 [2024-12-13 06:42:28.048225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.448 [2024-12-13 06:42:28.048237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.448 [2024-12-13 06:42:28.048243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.448 [2024-12-13 06:42:28.048249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.448 [2024-12-13 06:42:28.048262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.448 qpair failed and we were unable to recover it.
00:36:36.448 [2024-12-13 06:42:28.058134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.448 [2024-12-13 06:42:28.058183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.448 [2024-12-13 06:42:28.058195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.448 [2024-12-13 06:42:28.058201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.448 [2024-12-13 06:42:28.058207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.448 [2024-12-13 06:42:28.058221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.448 qpair failed and we were unable to recover it.
00:36:36.448 [2024-12-13 06:42:28.068272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.448 [2024-12-13 06:42:28.068329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.448 [2024-12-13 06:42:28.068341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.448 [2024-12-13 06:42:28.068347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.448 [2024-12-13 06:42:28.068356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.448 [2024-12-13 06:42:28.068371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.448 qpair failed and we were unable to recover it.
00:36:36.448 [2024-12-13 06:42:28.078254] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.448 [2024-12-13 06:42:28.078308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.448 [2024-12-13 06:42:28.078321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.448 [2024-12-13 06:42:28.078328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.448 [2024-12-13 06:42:28.078333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.448 [2024-12-13 06:42:28.078347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.448 qpair failed and we were unable to recover it.
00:36:36.448 [2024-12-13 06:42:28.088272] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.448 [2024-12-13 06:42:28.088328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.448 [2024-12-13 06:42:28.088341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.448 [2024-12-13 06:42:28.088347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.448 [2024-12-13 06:42:28.088353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.448 [2024-12-13 06:42:28.088368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.448 qpair failed and we were unable to recover it.
00:36:36.448 [2024-12-13 06:42:28.098306] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.449 [2024-12-13 06:42:28.098353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.449 [2024-12-13 06:42:28.098365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.449 [2024-12-13 06:42:28.098372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.449 [2024-12-13 06:42:28.098377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.449 [2024-12-13 06:42:28.098391] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.449 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.108323] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.108379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.108392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.108399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.108406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.108420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.118419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.118478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.118492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.118498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.118504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.118518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.128356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.128461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.128474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.128481] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.128487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.128501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.138351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.138404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.138417] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.138423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.138429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.138443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.148453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.148540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.148554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.148560] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.148565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.148580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.158431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.158494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.158510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.158516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.158522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.158536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.168492] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.168549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.168562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.168568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.168573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.168588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.178549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.178615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.178628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.178635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.178640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.178654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.188578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.188658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.188671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.188676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.188682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.188696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.198605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.198668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.198680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.198686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.198695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.198709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.208557] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.208650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.208663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.208669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.208674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.208688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.218664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.218718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.218730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.709 [2024-12-13 06:42:28.218736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.709 [2024-12-13 06:42:28.218742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.709 [2024-12-13 06:42:28.218756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.709 qpair failed and we were unable to recover it.
00:36:36.709 [2024-12-13 06:42:28.228600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.709 [2024-12-13 06:42:28.228655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.709 [2024-12-13 06:42:28.228667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.228673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.228679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.228693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.238720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.238778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.238793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.238800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.238806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.238825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.248731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.248784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.248797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.248803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.248809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.248823] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.258790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.258857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.258870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.258876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.258882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.258896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.268721] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.268772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.268785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.268791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.268797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.268811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.278774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.278854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.278866] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.278873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.278878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.278892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.288853] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.288908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.288924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.288930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.288935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.288949] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.298917] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.298993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.299006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.299012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.299018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.299032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.308820] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.308880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.308891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.308898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.308903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.308917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.318861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.318918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.318930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.318936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.318942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.318956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.328981] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.329044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.329056] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.329066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.329071] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.329086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.338921] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.338974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.338986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.338992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.338998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.339012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.348953] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.349003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.349015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.349021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.349027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.710 [2024-12-13 06:42:28.349041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.710 qpair failed and we were unable to recover it.
00:36:36.710 [2024-12-13 06:42:28.358983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.710 [2024-12-13 06:42:28.359037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.710 [2024-12-13 06:42:28.359049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.710 [2024-12-13 06:42:28.359055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.710 [2024-12-13 06:42:28.359061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.711 [2024-12-13 06:42:28.359075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.711 qpair failed and we were unable to recover it.
00:36:36.971 [2024-12-13 06:42:28.369014] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.971 [2024-12-13 06:42:28.369109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.971 [2024-12-13 06:42:28.369122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.971 [2024-12-13 06:42:28.369129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.971 [2024-12-13 06:42:28.369135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.971 [2024-12-13 06:42:28.369152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.971 qpair failed and we were unable to recover it.
00:36:36.971 [2024-12-13 06:42:28.379052] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.971 [2024-12-13 06:42:28.379103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.971 [2024-12-13 06:42:28.379116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.971 [2024-12-13 06:42:28.379122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.971 [2024-12-13 06:42:28.379128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.971 [2024-12-13 06:42:28.379141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.971 qpair failed and we were unable to recover it.
00:36:36.971 [2024-12-13 06:42:28.389095] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:36.971 [2024-12-13 06:42:28.389161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:36.971 [2024-12-13 06:42:28.389175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:36.971 [2024-12-13 06:42:28.389181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:36.971 [2024-12-13 06:42:28.389187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:36.971 [2024-12-13 06:42:28.389201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:36.971 qpair failed and we were unable to recover it.
00:36:36.971 [2024-12-13 06:42:28.399182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.399255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.399268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.399274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.399279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.399294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.409160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.409230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.409243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.409249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.409256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.409270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.419210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.419270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.419283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.419289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.419295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.419309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.429168] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.429232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.429244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.429250] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.429256] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.429269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.439203] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.439258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.439271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.439277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.439283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.439297] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.449217] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.449272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.449284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.449290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.449296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.449309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.459237] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.459300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.459312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.459322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.459327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.459342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.469270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.469328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.469341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.469347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.469352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.469366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.479333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.479392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.479405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.479412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.479417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.479431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.489413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.971 [2024-12-13 06:42:28.489470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.971 [2024-12-13 06:42:28.489483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.971 [2024-12-13 06:42:28.489489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.971 [2024-12-13 06:42:28.489495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.971 [2024-12-13 06:42:28.489509] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.971 qpair failed and we were unable to recover it. 
00:36:36.971 [2024-12-13 06:42:28.499473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.499553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.499566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.499572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.499578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.499595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.509481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.509553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.509565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.509572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.509578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.509592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.519518] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.519577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.519589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.519596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.519602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.519616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.529475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.529530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.529542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.529548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.529554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.529568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.539544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.539594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.539606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.539612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.539619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.539633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.549578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.549675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.549687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.549694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.549699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.549713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.559545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.559603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.559615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.559621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.559627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.559641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.569638] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.569702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.569715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.569721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.569727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.569741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.579675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.579729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.579742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.579748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.579754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.579768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.589696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.589747] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.589762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.589768] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.589774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.589788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.599669] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.599764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.599776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.599782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.599788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.599802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.609764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.609820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.609833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.609838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.609844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.609858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:36.972 [2024-12-13 06:42:28.619792] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:36.972 [2024-12-13 06:42:28.619845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:36.972 [2024-12-13 06:42:28.619858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:36.972 [2024-12-13 06:42:28.619863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:36.972 [2024-12-13 06:42:28.619885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:36.972 [2024-12-13 06:42:28.619901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:36.972 qpair failed and we were unable to recover it. 
00:36:37.232 [2024-12-13 06:42:28.629810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.232 [2024-12-13 06:42:28.629861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.232 [2024-12-13 06:42:28.629874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.232 [2024-12-13 06:42:28.629880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.232 [2024-12-13 06:42:28.629889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.232 [2024-12-13 06:42:28.629903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.232 qpair failed and we were unable to recover it. 
00:36:37.232 [2024-12-13 06:42:28.639865] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.232 [2024-12-13 06:42:28.639927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.232 [2024-12-13 06:42:28.639940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.232 [2024-12-13 06:42:28.639947] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.232 [2024-12-13 06:42:28.639953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.232 [2024-12-13 06:42:28.639967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.232 qpair failed and we were unable to recover it. 
00:36:37.232 [2024-12-13 06:42:28.649861] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.232 [2024-12-13 06:42:28.649913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.233 [2024-12-13 06:42:28.649925] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.233 [2024-12-13 06:42:28.649931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.233 [2024-12-13 06:42:28.649937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.233 [2024-12-13 06:42:28.649951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.233 qpair failed and we were unable to recover it. 
00:36:37.233 [2024-12-13 06:42:28.659906] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.233 [2024-12-13 06:42:28.659958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.233 [2024-12-13 06:42:28.659971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.233 [2024-12-13 06:42:28.659977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.233 [2024-12-13 06:42:28.659983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.233 [2024-12-13 06:42:28.659996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.233 qpair failed and we were unable to recover it. 
00:36:37.233 [2024-12-13 06:42:28.669858] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.669924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.669936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.669942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.669948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.669962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.680027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.680093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.680106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.680112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.680118] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.680132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.689996] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.690066] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.690078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.690084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.690090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.690104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.700021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.700076] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.700088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.700094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.700100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.700113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.709967] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.710019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.710031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.710037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.710042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.710056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.720076] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.720132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.720147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.720154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.720159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.720173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.730105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.730203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.730215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.730221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.730226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.730241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.740132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.740183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.740195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.740201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.740207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.740221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.750172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.750225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.750237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.750242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.750248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.750262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.760201] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.760269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.760282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.760288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.760296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.760310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.770220] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.770274] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.770287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.770293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.770298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.770312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.780214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.780268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.780281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.780288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.780294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.780307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.790283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.790337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.790349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.790355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.790361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.790375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.800328] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.800383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.800396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.800402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.800408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.800422] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.810338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.810393] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.810405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.233 [2024-12-13 06:42:28.810411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.233 [2024-12-13 06:42:28.810417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.233 [2024-12-13 06:42:28.810431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.233 qpair failed and we were unable to recover it.
00:36:37.233 [2024-12-13 06:42:28.820303] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.233 [2024-12-13 06:42:28.820358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.233 [2024-12-13 06:42:28.820371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.820377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.820383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.820397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.830386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.830441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.830458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.830464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.830469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.830484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.840432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.840491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.840503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.840510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.840516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.840530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.850463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.850520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.850535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.850542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.850547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.850561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.860477] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.860530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.860543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.860549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.860554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.860568] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.870502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.870554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.870566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.870572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.870578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.870592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.234 [2024-12-13 06:42:28.880545] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.234 [2024-12-13 06:42:28.880598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.234 [2024-12-13 06:42:28.880611] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.234 [2024-12-13 06:42:28.880617] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.234 [2024-12-13 06:42:28.880622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.234 [2024-12-13 06:42:28.880637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.234 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.890564] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.890618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.890631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.890641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.890646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.890661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.900515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.900568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.900580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.900587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.900592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.900606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.910603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.910690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.910704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.910710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.910716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.910731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.920675] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.920740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.920753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.920759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.920765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.920779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.930692] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.930746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.930759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.930765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.930771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.930788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.940717] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.940769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.940781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.940788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.940793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.940807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.950795] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.950850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.950862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.950868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.950874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.950888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.960809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.960864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.960876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.960883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.960888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.960902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.970797] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.970853] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.970865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.970871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.970877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.970891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.980817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.980917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.980930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.493 [2024-12-13 06:42:28.980936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.493 [2024-12-13 06:42:28.980942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.493 [2024-12-13 06:42:28.980956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.493 qpair failed and we were unable to recover it.
00:36:37.493 [2024-12-13 06:42:28.990880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.493 [2024-12-13 06:42:28.990927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.493 [2024-12-13 06:42:28.990939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.494 [2024-12-13 06:42:28.990945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.494 [2024-12-13 06:42:28.990951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.494 [2024-12-13 06:42:28.990965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.494 qpair failed and we were unable to recover it.
00:36:37.494 [2024-12-13 06:42:29.000894] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.494 [2024-12-13 06:42:29.000947] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.494 [2024-12-13 06:42:29.000960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.494 [2024-12-13 06:42:29.000966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.494 [2024-12-13 06:42:29.000973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.494 [2024-12-13 06:42:29.000987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.494 qpair failed and we were unable to recover it.
00:36:37.494 [2024-12-13 06:42:29.010969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:37.494 [2024-12-13 06:42:29.011071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:37.494 [2024-12-13 06:42:29.011083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:37.494 [2024-12-13 06:42:29.011089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:37.494 [2024-12-13 06:42:29.011095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:37.494 [2024-12-13 06:42:29.011109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:37.494 qpair failed and we were unable to recover it.
00:36:37.494 [2024-12-13 06:42:29.020923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.021124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.021139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.021148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.021154] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.021169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.030966] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.031011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.031024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.031030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.031036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.031050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.041008] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.041060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.041073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.041079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.041085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.041098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.051073] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.051136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.051148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.051154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.051160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.051173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.061060] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.061114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.061126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.061133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.061138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.061155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.071079] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.071136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.071149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.071155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.071161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.071174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.081110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.081167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.081180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.081186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.081192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.081206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.091132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.091183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.091195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.091202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.091208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.091222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.101182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.101236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.101248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.101255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.101261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.101274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.111249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.111344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.111357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.111363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.111369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.111383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.121238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.121292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.121305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.121311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.121316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.121330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.131194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.131250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.131262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.131269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.131274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.131289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.494 [2024-12-13 06:42:29.141304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.494 [2024-12-13 06:42:29.141361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.494 [2024-12-13 06:42:29.141374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.494 [2024-12-13 06:42:29.141381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.494 [2024-12-13 06:42:29.141387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.494 [2024-12-13 06:42:29.141402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.494 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.151339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.151394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.151410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.151416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.151422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.151436] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.161397] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.161459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.161472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.161478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.161484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.161498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.171371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.171426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.171438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.171444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.171455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.171469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.181395] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.181465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.181478] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.181484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.181490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.181504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.191416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.191471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.191484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.191490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.191499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.191514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.201506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.201601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.201614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.201620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.201626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.201640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.211506] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.211576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.211590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.211596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.211602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.211617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.221515] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.221567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.221580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.221586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.221592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.221606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.231470] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.231525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.231538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.231544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.231550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.231564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.241588] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.241667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.241679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.241686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.241692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.241707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.251611] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.251665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.251677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.251683] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.251689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.251703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.261640] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.261694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.261707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.261713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.261719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.261732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.271667] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.271758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.271770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.271776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.271781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.271796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.281683] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.281739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.281754] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.281760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.753 [2024-12-13 06:42:29.281766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.753 [2024-12-13 06:42:29.281780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.753 qpair failed and we were unable to recover it. 
00:36:37.753 [2024-12-13 06:42:29.291748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.753 [2024-12-13 06:42:29.291801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.753 [2024-12-13 06:42:29.291813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.753 [2024-12-13 06:42:29.291819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.291825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.291839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.301787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.301844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.301856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.301862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.301868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.301882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.311772] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.311827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.311839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.311846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.311851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.311865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.321807] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.321863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.321876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.321882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.321892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.321906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.331850] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.331909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.331921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.331928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.331933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.331947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.341844] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.341896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.341908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.341914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.341920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.341934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.351878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.351929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.351942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.351948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.351953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.351967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.361920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.361973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.361985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.361991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.361997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.362011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.371933] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.371984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.371996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.372003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.372008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.372022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.381961] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.382015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.382028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.382034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.382039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.382054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.391923] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.391980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.391994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.392000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.392005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.392020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:37.754 [2024-12-13 06:42:29.402041] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:37.754 [2024-12-13 06:42:29.402101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:37.754 [2024-12-13 06:42:29.402114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:37.754 [2024-12-13 06:42:29.402120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:37.754 [2024-12-13 06:42:29.402126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:37.754 [2024-12-13 06:42:29.402140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:37.754 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.412057] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.014 [2024-12-13 06:42:29.412114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.014 [2024-12-13 06:42:29.412129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.014 [2024-12-13 06:42:29.412136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.014 [2024-12-13 06:42:29.412142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.014 [2024-12-13 06:42:29.412156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.014 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.422086] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.014 [2024-12-13 06:42:29.422137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.014 [2024-12-13 06:42:29.422149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.014 [2024-12-13 06:42:29.422155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.014 [2024-12-13 06:42:29.422161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.014 [2024-12-13 06:42:29.422174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.014 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.432150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.014 [2024-12-13 06:42:29.432204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.014 [2024-12-13 06:42:29.432216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.014 [2024-12-13 06:42:29.432223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.014 [2024-12-13 06:42:29.432228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.014 [2024-12-13 06:42:29.432243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.014 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.442172] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.014 [2024-12-13 06:42:29.442227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.014 [2024-12-13 06:42:29.442240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.014 [2024-12-13 06:42:29.442246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.014 [2024-12-13 06:42:29.442252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.014 [2024-12-13 06:42:29.442266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.014 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.452141] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.014 [2024-12-13 06:42:29.452204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.014 [2024-12-13 06:42:29.452217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.014 [2024-12-13 06:42:29.452226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.014 [2024-12-13 06:42:29.452232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.014 [2024-12-13 06:42:29.452246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.014 qpair failed and we were unable to recover it. 
00:36:38.014 [2024-12-13 06:42:29.462214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.462278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.462290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.462297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.462302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.462317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.472225] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.472295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.472308] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.472314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.472320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.472335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.482269] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.482336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.482349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.482355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.482361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.482375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.492279] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.492335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.492347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.492353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.492359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.492377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.502270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.502326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.502339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.502345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.502351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.502365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.512339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.512440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.512457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.512464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.512469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.512483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.522437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.522540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.522553] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.522559] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.522564] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.522578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.532427] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.532483] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.532495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.532501] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.532507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.532521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.542356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.542414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.542427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.542433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.542439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.542457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.552463] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.552515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.552528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.552534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.552540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.552554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.562526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.562582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.562594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.562600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.562606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.562620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.572517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.572577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.572590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.572596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.572602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.572616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.582553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.582626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.582639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.015 [2024-12-13 06:42:29.582648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.015 [2024-12-13 06:42:29.582654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.015 [2024-12-13 06:42:29.582668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.015 qpair failed and we were unable to recover it. 
00:36:38.015 [2024-12-13 06:42:29.592561] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.015 [2024-12-13 06:42:29.592615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.015 [2024-12-13 06:42:29.592627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.016 [2024-12-13 06:42:29.592633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.016 [2024-12-13 06:42:29.592639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.016 [2024-12-13 06:42:29.592653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.016 qpair failed and we were unable to recover it. 
00:36:38.016 [2024-12-13 06:42:29.602550] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.602630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.602642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.602649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.602654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.602669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.612679] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.612735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.612749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.612756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.612762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.612776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.622739] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.622805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.622817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.622824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.622830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.622847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.632665] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.632717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.632730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.632736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.632742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.632756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.642705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.642767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.642781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.642788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.642794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.642808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.652765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.652820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.652833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.652840] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.652846] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.652860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.016 [2024-12-13 06:42:29.662704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.016 [2024-12-13 06:42:29.662754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.016 [2024-12-13 06:42:29.662766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.016 [2024-12-13 06:42:29.662773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.016 [2024-12-13 06:42:29.662779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.016 [2024-12-13 06:42:29.662793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.016 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.672750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.672803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.672816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.672823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.276 [2024-12-13 06:42:29.672829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.276 [2024-12-13 06:42:29.672843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.276 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.682866] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.682933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.682946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.682952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.276 [2024-12-13 06:42:29.682958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.276 [2024-12-13 06:42:29.682973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.276 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.692872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.692940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.692953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.692959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.276 [2024-12-13 06:42:29.692965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.276 [2024-12-13 06:42:29.692980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.276 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.702841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.702896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.702909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.702915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.276 [2024-12-13 06:42:29.702921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.276 [2024-12-13 06:42:29.702935] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.276 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.712927] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.712973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.712989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.712996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.276 [2024-12-13 06:42:29.713002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.276 [2024-12-13 06:42:29.713017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.276 qpair failed and we were unable to recover it.
00:36:38.276 [2024-12-13 06:42:29.722896] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.276 [2024-12-13 06:42:29.722949] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.276 [2024-12-13 06:42:29.722961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.276 [2024-12-13 06:42:29.722968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.722974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.722989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.733007] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.733062] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.733075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.733081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.733088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.733102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.742969] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.743057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.743070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.743077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.743083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.743098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.752983] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.753030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.753045] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.753052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.753061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.753076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.763016] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.763071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.763084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.763091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.763096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.763111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.773117] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.773175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.773188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.773195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.773201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.773216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.783105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.783159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.783172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.783179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.783185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.783200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.793170] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.793221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.793234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.793241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.793247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.793261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.803212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.803266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.803280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.803287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.803293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.803307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.813240] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.813315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.813329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.813336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.813342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.813357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.823258] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.823326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.823338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.823345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.823351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.823366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.833262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.833330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.833343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.833349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.833356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.833370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.843343] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.843400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.843415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.843421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.843427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.843442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.277 [2024-12-13 06:42:29.853321] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.277 [2024-12-13 06:42:29.853376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.277 [2024-12-13 06:42:29.853388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.277 [2024-12-13 06:42:29.853394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.277 [2024-12-13 06:42:29.853400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.277 [2024-12-13 06:42:29.853415] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.277 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.863288] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.863342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.863355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.863362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.863368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.863382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.873387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.873438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.873456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.873463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.873469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.873484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.883414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.883476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.883490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.883496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.883505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.883521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.893383] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.893460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.893475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.893482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.893488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.893503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.903413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.903469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.903481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.903487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.903493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.903507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.913532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.913585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.913600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.913606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.913612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.913626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.278 [2024-12-13 06:42:29.923534] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.278 [2024-12-13 06:42:29.923588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.278 [2024-12-13 06:42:29.923601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.278 [2024-12-13 06:42:29.923608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.278 [2024-12-13 06:42:29.923613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.278 [2024-12-13 06:42:29.923628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.278 qpair failed and we were unable to recover it.
00:36:38.537 [2024-12-13 06:42:29.933556] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.537 [2024-12-13 06:42:29.933609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.537 [2024-12-13 06:42:29.933622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.537 [2024-12-13 06:42:29.933628] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.537 [2024-12-13 06:42:29.933634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.537 [2024-12-13 06:42:29.933649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.537 qpair failed and we were unable to recover it.
00:36:38.537 [2024-12-13 06:42:29.943530] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:38.537 [2024-12-13 06:42:29.943605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:38.537 [2024-12-13 06:42:29.943618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:38.537 [2024-12-13 06:42:29.943625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:38.537 [2024-12-13 06:42:29.943631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90
00:36:38.537 [2024-12-13 06:42:29.943645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:36:38.537 qpair failed and we were unable to recover it.
00:36:38.537 [2024-12-13 06:42:29.953603] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.537 [2024-12-13 06:42:29.953651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.537 [2024-12-13 06:42:29.953664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.537 [2024-12-13 06:42:29.953670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.537 [2024-12-13 06:42:29.953676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.537 [2024-12-13 06:42:29.953690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.537 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:29.963635] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:29.963693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:29.963705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:29.963711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:29.963717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:29.963730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:29.973645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:29.973720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:29.973733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:29.973739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:29.973744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:29.973758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:29.983726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:29.983780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:29.983793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:29.983798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:29.983804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:29.983818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:29.993707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:29.993759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:29.993772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:29.993778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:29.993784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:29.993797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.003756] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.003811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.003824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.003831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.003837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.003852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.013802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.013880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.013897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.013908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.013914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.013931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.023809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.023865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.023878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.023884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.023890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.023905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.033909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.033968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.033983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.033990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.033996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.034010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.043870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.043928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.043941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.043948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.043954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.043968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.053890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.053975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.053988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.053994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.054000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7c8000b90 00:36:38.538 [2024-12-13 06:42:30.054017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.064002] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.064087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.064129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.064149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.064165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7cc000b90 00:36:38.538 [2024-12-13 06:42:30.064205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:38.538 qpair failed and we were unable to recover it. 
00:36:38.538 [2024-12-13 06:42:30.074029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:38.538 [2024-12-13 06:42:30.074107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:38.538 [2024-12-13 06:42:30.074128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:38.538 [2024-12-13 06:42:30.074139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:38.538 [2024-12-13 06:42:30.074149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7cc000b90 00:36:38.538 [2024-12-13 06:42:30.074173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:38.538 qpair failed and we were unable to recover it. 00:36:38.538 [2024-12-13 06:42:30.074283] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:38.538 A controller has encountered a failure and is being reset. 00:36:38.538 Controller properly reset. 00:36:38.797 Initializing NVMe Controllers 00:36:38.797 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.797 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:38.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:38.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:38.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:38.798 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:38.798 Initialization complete. Launching workers. 
00:36:38.798 Starting thread on core 1 00:36:38.798 Starting thread on core 2 00:36:38.798 Starting thread on core 3 00:36:38.798 Starting thread on core 0 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:38.798 00:36:38.798 real 0m10.779s 00:36:38.798 user 0m19.447s 00:36:38.798 sys 0m4.800s 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:38.798 ************************************ 00:36:38.798 END TEST nvmf_target_disconnect_tc2 00:36:38.798 ************************************ 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.798 rmmod nvme_tcp 00:36:38.798 rmmod nvme_fabrics 00:36:38.798 rmmod nvme_keyring 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 1209280 ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 1209280 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1209280 ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1209280 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1209280 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1209280' 00:36:38.798 killing process with pid 1209280 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1209280 00:36:38.798 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1209280 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:39.057 06:42:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.961 06:42:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.961 00:36:40.961 real 0m19.464s 00:36:40.961 user 0m47.085s 00:36:40.961 sys 0m9.699s 00:36:40.961 06:42:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.961 06:42:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:40.961 ************************************ 00:36:40.961 END TEST nvmf_target_disconnect 00:36:40.961 ************************************ 00:36:41.220 06:42:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:36:41.220 00:36:41.220 real 7m20.998s 00:36:41.220 user 16m48.459s 00:36:41.220 sys 2m8.652s 00:36:41.220 06:42:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:41.220 06:42:32 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.220 ************************************ 00:36:41.220 END TEST nvmf_host 00:36:41.220 ************************************ 00:36:41.220 06:42:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:36:41.220 06:42:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:36:41.220 06:42:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:41.220 06:42:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.220 06:42:32 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.220 06:42:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:36:41.220 ************************************ 00:36:41.220 START TEST nvmf_target_core_interrupt_mode 00:36:41.220 ************************************ 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:36:41.220 * Looking for test storage... 
00:36:41.220 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:41.220 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:36:41.480 06:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.480 --rc 
genhtml_branch_coverage=1 00:36:41.480 --rc genhtml_function_coverage=1 00:36:41.480 --rc genhtml_legend=1 00:36:41.480 --rc geninfo_all_blocks=1 00:36:41.480 --rc geninfo_unexecuted_blocks=1 00:36:41.480 00:36:41.480 ' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.480 --rc genhtml_branch_coverage=1 00:36:41.480 --rc genhtml_function_coverage=1 00:36:41.480 --rc genhtml_legend=1 00:36:41.480 --rc geninfo_all_blocks=1 00:36:41.480 --rc geninfo_unexecuted_blocks=1 00:36:41.480 00:36:41.480 ' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.480 --rc genhtml_branch_coverage=1 00:36:41.480 --rc genhtml_function_coverage=1 00:36:41.480 --rc genhtml_legend=1 00:36:41.480 --rc geninfo_all_blocks=1 00:36:41.480 --rc geninfo_unexecuted_blocks=1 00:36:41.480 00:36:41.480 ' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:41.480 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.480 --rc genhtml_branch_coverage=1 00:36:41.480 --rc genhtml_function_coverage=1 00:36:41.480 --rc genhtml_legend=1 00:36:41.480 --rc geninfo_all_blocks=1 00:36:41.480 --rc geninfo_unexecuted_blocks=1 00:36:41.480 00:36:41.480 ' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.480 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.481 
06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.481 06:42:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.481 
06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:41.481 ************************************ 00:36:41.481 START TEST nvmf_abort 00:36:41.481 ************************************ 00:36:41.481 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:36:41.481 * Looking for test storage... 
00:36:41.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:36:41.481 06:42:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.481 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:41.481 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.481 --rc genhtml_branch_coverage=1 00:36:41.481 --rc genhtml_function_coverage=1 00:36:41.481 --rc genhtml_legend=1 00:36:41.481 --rc geninfo_all_blocks=1 00:36:41.481 --rc geninfo_unexecuted_blocks=1 00:36:41.481 00:36:41.481 ' 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:41.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.741 --rc genhtml_branch_coverage=1 00:36:41.741 --rc genhtml_function_coverage=1 00:36:41.741 --rc genhtml_legend=1 00:36:41.741 --rc geninfo_all_blocks=1 00:36:41.741 --rc geninfo_unexecuted_blocks=1 00:36:41.741 00:36:41.741 ' 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:41.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.741 --rc genhtml_branch_coverage=1 00:36:41.741 --rc genhtml_function_coverage=1 00:36:41.741 --rc genhtml_legend=1 00:36:41.741 --rc geninfo_all_blocks=1 00:36:41.741 --rc geninfo_unexecuted_blocks=1 00:36:41.741 00:36:41.741 ' 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:41.741 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.741 --rc genhtml_branch_coverage=1 00:36:41.741 --rc genhtml_function_coverage=1 00:36:41.741 --rc genhtml_legend=1 00:36:41.741 --rc geninfo_all_blocks=1 00:36:41.741 --rc geninfo_unexecuted_blocks=1 00:36:41.741 00:36:41.741 ' 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.741 06:42:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.741 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.742 06:42:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.742 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:36:48.311 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:48.312 06:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:48.312 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:48.312 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:48.312 
06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:48.312 Found net devices under 0000:af:00.0: cvl_0_0 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:48.312 Found net devices under 0000:af:00.1: cvl_0_1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:48.312 06:42:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:48.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:48.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:36:48.312 00:36:48.312 --- 10.0.0.2 ping statistics --- 00:36:48.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.312 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:48.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:48.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:36:48.312 00:36:48.312 --- 10.0.0.1 ping statistics --- 00:36:48.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:48.312 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:48.312 06:42:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:48.312 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=1213940 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1213940 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1213940 ']' 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:48.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 [2024-12-13 06:42:39.084048] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:48.313 [2024-12-13 06:42:39.085027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:36:48.313 [2024-12-13 06:42:39.085069] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:48.313 [2024-12-13 06:42:39.163851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:48.313 [2024-12-13 06:42:39.186676] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:48.313 [2024-12-13 06:42:39.186712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:48.313 [2024-12-13 06:42:39.186719] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:48.313 [2024-12-13 06:42:39.186725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:48.313 [2024-12-13 06:42:39.186730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:48.313 [2024-12-13 06:42:39.188030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:48.313 [2024-12-13 06:42:39.188140] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:48.313 [2024-12-13 06:42:39.188142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:48.313 [2024-12-13 06:42:39.250952] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:48.313 [2024-12-13 06:42:39.251790] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:48.313 [2024-12-13 06:42:39.252178] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:36:48.313 [2024-12-13 06:42:39.252286] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 [2024-12-13 06:42:39.316989] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:36:48.313 Malloc0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 Delay0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 [2024-12-13 06:42:39.404856] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:48.313 06:42:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:36:48.313 [2024-12-13 06:42:39.574598] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:36:50.218 Initializing NVMe Controllers 00:36:50.218 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:36:50.218 controller IO queue size 128 less than required 00:36:50.218 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:36:50.218 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:36:50.218 Initialization complete. Launching workers. 
00:36:50.218 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37864 00:36:50.218 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37921, failed to submit 66 00:36:50.218 success 37864, unsuccessful 57, failed 0 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:50.218 rmmod nvme_tcp 00:36:50.218 rmmod nvme_fabrics 00:36:50.218 rmmod nvme_keyring 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:50.218 06:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1213940 ']' 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1213940 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1213940 ']' 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1213940 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1213940 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1213940' 00:36:50.218 killing process with pid 1213940 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1213940 00:36:50.218 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1213940 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:50.478 06:42:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:50.478 06:42:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:53.013 00:36:53.013 real 0m11.090s 00:36:53.013 user 0m10.548s 00:36:53.013 sys 0m5.688s 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:36:53.013 ************************************ 00:36:53.013 END TEST nvmf_abort 00:36:53.013 ************************************ 00:36:53.013 06:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:36:53.013 ************************************ 00:36:53.013 START TEST nvmf_ns_hotplug_stress 00:36:53.013 ************************************ 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:36:53.013 * Looking for test storage... 
00:36:53.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:53.013 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:36:53.014 06:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:36:53.014 06:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:53.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.014 --rc genhtml_branch_coverage=1 00:36:53.014 --rc genhtml_function_coverage=1 00:36:53.014 --rc genhtml_legend=1 00:36:53.014 --rc geninfo_all_blocks=1 00:36:53.014 --rc geninfo_unexecuted_blocks=1 00:36:53.014 00:36:53.014 ' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:53.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.014 --rc genhtml_branch_coverage=1 00:36:53.014 --rc genhtml_function_coverage=1 00:36:53.014 --rc genhtml_legend=1 00:36:53.014 --rc geninfo_all_blocks=1 00:36:53.014 --rc geninfo_unexecuted_blocks=1 00:36:53.014 00:36:53.014 ' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:53.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.014 --rc genhtml_branch_coverage=1 00:36:53.014 --rc genhtml_function_coverage=1 00:36:53.014 --rc genhtml_legend=1 00:36:53.014 --rc geninfo_all_blocks=1 00:36:53.014 --rc geninfo_unexecuted_blocks=1 00:36:53.014 00:36:53.014 ' 00:36:53.014 06:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:53.014 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:53.014 --rc genhtml_branch_coverage=1 00:36:53.014 --rc genhtml_function_coverage=1 00:36:53.014 --rc genhtml_legend=1 00:36:53.014 --rc geninfo_all_blocks=1 00:36:53.014 --rc geninfo_unexecuted_blocks=1 00:36:53.014 00:36:53.014 ' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:53.014 06:42:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.014 
06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:53.014 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:36:53.015 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:36:58.359 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:36:58.359 
06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:58.360 06:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:58.360 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.360 06:42:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:58.360 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.360 
06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:58.360 Found net devices under 0000:af:00.0: cvl_0_0 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:58.360 Found net devices under 0000:af:00.1: cvl_0_1 00:36:58.360 
06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:58.360 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:58.619 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:58.620 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:58.620 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.289 ms 00:36:58.620 00:36:58.620 --- 10.0.0.2 ping statistics --- 00:36:58.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.620 rtt min/avg/max/mdev = 0.289/0.289/0.289/0.000 ms 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:58.620 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:58.620 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:36:58.620 00:36:58.620 --- 10.0.0.1 ping statistics --- 00:36:58.620 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:58.620 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:58.620 06:42:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1217859 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1217859 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1217859 ']' 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:58.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:58.620 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:58.620 [2024-12-13 06:42:50.256067] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:36:58.620 [2024-12-13 06:42:50.256964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:36:58.620 [2024-12-13 06:42:50.256998] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:58.879 [2024-12-13 06:42:50.334954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:58.879 [2024-12-13 06:42:50.356710] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:58.879 [2024-12-13 06:42:50.356744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:58.879 [2024-12-13 06:42:50.356752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:58.879 [2024-12-13 06:42:50.356758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:58.879 [2024-12-13 06:42:50.356763] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
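The namespace plumbing traced above (under `nvmf_tcp_init`) can be sketched as the following command sequence; interface names and addresses are copied from this log, and the target-side port is moved into its own netns so target and initiator can talk over real NICs on a single host:

```shell
# Target-side port (cvl_0_0) gets its own network namespace; the
# initiator-side port (cvl_0_1) stays in the default namespace.
sudo ip netns add cvl_0_0_ns_spdk
sudo ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# Addresses as assigned in this trace: 10.0.0.1 = initiator, 10.0.0.2 = target.
sudo ip addr add 10.0.0.1/24 dev cvl_0_1
sudo ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

sudo ip link set cvl_0_1 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
sudo ip netns exec cvl_0_0_ns_spdk ip link set lo up

# Open the NVMe/TCP port toward the initiator, then verify reachability
# both ways, as the ping output in the log shows.
sudo iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
sudo ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
```

This requires the physical NICs and root privileges present on the CI node; it is a reading of the trace, not a standalone reproduction.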
00:36:58.879 [2024-12-13 06:42:50.357965] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:58.879 [2024-12-13 06:42:50.358070] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.879 [2024-12-13 06:42:50.358071] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:58.879 [2024-12-13 06:42:50.420303] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:36:58.879 [2024-12-13 06:42:50.421049] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:36:58.879 [2024-12-13 06:42:50.421318] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:36:58.879 [2024-12-13 06:42:50.421472] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:36:58.879 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:36:59.138 [2024-12-13 06:42:50.658865] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:59.138 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:36:59.396 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:59.396 [2024-12-13 06:42:51.051274] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:59.655 06:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:59.655 06:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:36:59.914 Malloc0 00:36:59.914 06:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:00.173 Delay0 00:37:00.173 06:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:00.431 06:42:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:00.431 NULL1 00:37:00.431 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:00.690 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1218117 00:37:00.690 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:00.690 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:00.690 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:02.069 Read completed with error (sct=0, sc=11) 00:37:02.069 06:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
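The RPC-driven target bring-up recorded in the trace above (transport, subsystem, listeners, and the three bdevs) condenses to roughly this sequence; the `rpc.py` path, NQN, and all arguments are copied from the log, and the calls must reach the `nvmf_tgt` started inside the `cvl_0_0_ns_spdk` namespace:

```shell
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport with optimized sock opts (-o) and 8 KiB in-capsule data.
$rpc nvmf_create_transport -t tcp -o -u 8192

# Subsystem cnode1: allow any host (-a), serial number, max 10 namespaces.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# Backing bdevs: a 32 MiB malloc disk wrapped in a delay bdev, plus a
# 1000-block null bdev used as the resize target later in the test.
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512

$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this, `spdk_nvme_perf` is launched against `traddr:10.0.0.2 trsvcid:4420` (PID 1218117 in this run) to keep I/O in flight while namespaces are hot-removed.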
00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.069 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:02.069 06:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:02.069 06:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:02.327 true 00:37:02.327 06:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:02.327 06:42:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:03.262 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:03.262 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:03.262 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:03.521 true 00:37:03.521 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:03.521 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:03.779 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:04.037 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:04.037 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:04.037 true 00:37:04.037 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:04.037 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 06:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:05.414 06:42:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:05.414 06:42:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:05.672 true 00:37:05.672 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:05.672 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:06.240 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.498 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:06.498 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:06.498 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:06.498 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:06.757 true 00:37:06.757 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:06.757 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:07.016 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:37:07.274 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:07.274 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:07.274 true 00:37:07.533 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:07.533 06:42:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:08.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.469 06:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:08.469 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:08.727 06:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:08.727 06:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:08.985 
true 00:37:08.985 06:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:08.985 06:43:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:09.921 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:09.921 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:09.921 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:10.179 true 00:37:10.179 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:10.179 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:10.437 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:10.696 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:10.696 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1009 00:37:10.696 true 00:37:10.696 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:10.696 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 06:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:12.073 06:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:12.073 06:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:12.332 true 00:37:12.332 06:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:12.332 06:43:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.267 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:37:13.267 06:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:13.267 06:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:13.267 06:43:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:13.526 true 00:37:13.526 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:13.526 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:13.784 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:14.041 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:14.041 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:14.041 true 00:37:14.041 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:14.041 06:43:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 06:43:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:15.418 06:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:15.418 06:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:15.676 true 00:37:15.676 06:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:15.677 06:43:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:16.612 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:16.612 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1014 00:37:16.612 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:16.871 true 00:37:16.871 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:16.871 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:17.130 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:17.389 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:17.389 06:43:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:17.389 true 00:37:17.648 06:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:17.648 06:43:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:18.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.585 06:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:18.585 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.585 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.862 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:18.862 06:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:18.862 06:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:19.121 true 00:37:19.121 06:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:19.121 06:43:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.057 06:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.057 06:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:20.057 06:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:20.316 true 00:37:20.316 06:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 
1218117 00:37:20.316 06:43:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.575 06:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.575 06:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:20.575 06:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:20.834 true 00:37:20.834 06:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:20.834 06:43:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:21.771 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.030 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:22.030 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:22.289 true 00:37:22.289 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:22.289 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.548 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.548 06:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:22.548 06:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:22.806 true 00:37:22.806 06:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:22.807 06:43:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.188 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:24.188 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:24.446 true 00:37:24.446 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:24.446 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.384 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:25.384 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:25.384 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:25.643 true 00:37:25.643 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:25.643 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.902 06:43:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.161 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:26.161 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:26.161 true 00:37:26.161 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:26.161 06:43:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:27.539 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:27.539 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:27.539 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:27.539 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:27.539 true 00:37:27.539 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:27.539 06:43:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:27.798 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.057 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:28.057 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:28.316 true 00:37:28.316 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:28.316 06:43:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.719 
06:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:29.719 06:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:30.046 true 00:37:30.047 06:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:30.047 06:43:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:30.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:30.636 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.636 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:30.895 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:30.895 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:30.895 Initializing NVMe Controllers 00:37:30.895 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:30.895 Controller IO queue size 128, less than required. 00:37:30.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:30.895 Controller IO queue size 128, less than required. 
00:37:30.895 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:30.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:30.895 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:30.895 Initialization complete. Launching workers. 00:37:30.895 ======================================================== 00:37:30.895 Latency(us) 00:37:30.895 Device Information : IOPS MiB/s Average min max 00:37:30.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2054.34 1.00 43036.85 1893.30 1058601.64 00:37:30.895 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18447.28 9.01 6938.90 1346.22 368357.93 00:37:30.895 ======================================================== 00:37:30.895 Total : 20501.62 10.01 10556.04 1346.22 1058601.64 00:37:30.895 00:37:31.154 true 00:37:31.154 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1218117 00:37:31.154 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1218117) - No such process 00:37:31.154 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1218117 00:37:31.154 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.154 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:31.413 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 
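The loop that dominates the log above (ns_hotplug_stress.sh lines @44–@50) keeps removing and re-adding namespace 1 while growing the NULL1 bdev by one block size per pass, for as long as the I/O application (pid 1218117) stays alive. A minimal standalone sketch of that loop, with `rpc.py` stubbed out so it runs without an SPDK target (the stub and the fixed three iterations are assumptions for illustration):

```shell
# Hedged sketch of the add/remove/resize loop seen in the log.
# rpc() is a hypothetical stand-in for scripts/rpc.py against a live target.
rpc() { echo "rpc.py $*"; }

null_size=1002
for _ in 1 2 3; do                      # in the real script: while kill -0 "$app_pid"
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
    null_size=$((null_size + 1))        # @49: size grows each pass (1003, 1004, ...)
    rpc bdev_null_resize NULL1 "$null_size"                     # @50
done
```

The `kill -0` checks in the log are pure liveness probes (signal 0 delivers nothing); once the application exits, the check fails with "No such process" and the script falls through to cleanup, exactly as seen above at 00:37:31.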
00:37:31.413 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:37:31.413 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:37:31.413 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:31.413 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:31.672 null0 00:37:31.672 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:31.672 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:31.672 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:31.930 null1 00:37:31.930 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:31.930 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:31.930 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:31.930 null2 00:37:31.930 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:31.930 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:31.930 06:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:32.189 null3 00:37:32.189 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:32.189 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:32.189 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:32.448 null4 00:37:32.448 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:32.448 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:32.448 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:32.448 null5 00:37:32.448 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:32.448 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:32.448 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:32.707 null6 00:37:32.707 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:32.707 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:32.707 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:32.966 null7 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:32.966 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1223296 1223298 1223299 1223301 1223303 1223305 1223307 1223308 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:32.967 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.226 06:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.226 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:33.486 06:43:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.486 06:43:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:33.486 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.744 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.745 06:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:33.745 06:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:33.745 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:34.003 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:34.262 06:43:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:34.262 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:34.522 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:34.522 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:34.522 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:34.522 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:34.522 06:43:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:34.522 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:34.781 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.040 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.041 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.300 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.559 06:43:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:35.559 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:35.818 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:36.077 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:36.336 06:43:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:36.595 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:37:36.854 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20}
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:37:37.113 rmmod nvme_tcp
00:37:37.113 rmmod nvme_fabrics
00:37:37.113 rmmod nvme_keyring
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1217859 ']'
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1217859
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1217859 ']'
00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress
-- common/autotest_common.sh@958 -- # kill -0 1217859 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1217859 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1217859' 00:37:37.113 killing process with pid 1217859 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1217859 00:37:37.113 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1217859 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:37.373 06:43:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.909 06:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:39.909 00:37:39.909 real 0m46.854s 00:37:39.909 user 2m55.067s 00:37:39.909 sys 0m19.381s 00:37:39.909 06:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:39.909 06:43:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:39.909 ************************************ 00:37:39.909 END TEST nvmf_ns_hotplug_stress 00:37:39.909 ************************************ 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:39.909 06:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:39.909 ************************************ 00:37:39.909 START TEST nvmf_delete_subsystem 00:37:39.909 ************************************ 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:39.909 * Looking for test storage... 00:37:39.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:39.909 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:39.910 06:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:39.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.910 --rc genhtml_branch_coverage=1 00:37:39.910 --rc genhtml_function_coverage=1 00:37:39.910 --rc genhtml_legend=1 00:37:39.910 --rc geninfo_all_blocks=1 00:37:39.910 --rc geninfo_unexecuted_blocks=1 00:37:39.910 00:37:39.910 ' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:39.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.910 --rc genhtml_branch_coverage=1 00:37:39.910 --rc genhtml_function_coverage=1 00:37:39.910 --rc genhtml_legend=1 00:37:39.910 --rc geninfo_all_blocks=1 00:37:39.910 --rc geninfo_unexecuted_blocks=1 00:37:39.910 00:37:39.910 ' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:39.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.910 --rc genhtml_branch_coverage=1 00:37:39.910 --rc genhtml_function_coverage=1 00:37:39.910 --rc genhtml_legend=1 00:37:39.910 --rc geninfo_all_blocks=1 00:37:39.910 --rc geninfo_unexecuted_blocks=1 00:37:39.910 00:37:39.910 ' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:39.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:39.910 --rc genhtml_branch_coverage=1 00:37:39.910 --rc genhtml_function_coverage=1 00:37:39.910 --rc genhtml_legend=1 00:37:39.910 --rc geninfo_all_blocks=1 00:37:39.910 --rc geninfo_unexecuted_blocks=1 00:37:39.910 00:37:39.910 ' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.910 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:39.911 06:43:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:39.911 06:43:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:37:45.185 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:45.186 06:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:45.186 06:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:45.186 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:45.186 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == 
unknown ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.186 06:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:45.186 Found net devices under 0000:af:00.0: cvl_0_0 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:45.186 Found net devices under 0000:af:00.1: cvl_0_1 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:37:45.186 06:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:45.186 06:43:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:45.186 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:45.446 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:45.446 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:45.446 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:45.446 06:43:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:45.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:37:45.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.322 ms 00:37:45.446 00:37:45.446 --- 10.0.0.2 ping statistics --- 00:37:45.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.446 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:45.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:45.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:37:45.446 00:37:45.446 --- 10.0.0.1 ping statistics --- 00:37:45.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:45.446 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:45.446 
06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1227463 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1227463 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1227463 ']' 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:45.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:37:45.446 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:45.705 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.706 [2024-12-13 06:43:37.145190] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:45.706 [2024-12-13 06:43:37.146140] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:37:45.706 [2024-12-13 06:43:37.146177] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:45.706 [2024-12-13 06:43:37.225999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:37:45.706 [2024-12-13 06:43:37.248764] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:45.706 [2024-12-13 06:43:37.248799] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:45.706 [2024-12-13 06:43:37.248806] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:45.706 [2024-12-13 06:43:37.248813] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:45.706 [2024-12-13 06:43:37.248818] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:45.706 [2024-12-13 06:43:37.249917] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:45.706 [2024-12-13 06:43:37.249920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:37:45.706 [2024-12-13 06:43:37.313664] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:37:45.706 [2024-12-13 06:43:37.314183] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:45.706 [2024-12-13 06:43:37.314385] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:45.706 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:45.706 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:37:45.706 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:45.706 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:45.706 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 [2024-12-13 06:43:37.382808] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 [2024-12-13 06:43:37.407315] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 NULL1 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 Delay0 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1227608 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:37:45.965 06:43:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:45.965 [2024-12-13 06:43:37.519168] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:37:47.869 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:47.869 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:47.869 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:48.128 Read completed with error (sct=0, sc=8) 00:37:48.128 Read completed with error (sct=0, sc=8) 00:37:48.128 Write completed with error (sct=0, sc=8) 00:37:48.128 Read completed with error (sct=0, sc=8) 00:37:48.128 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 
00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 [2024-12-13 06:43:39.726671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14825e0 is same with the state(6) to be set 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed 
with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read 
completed with error (sct=0, sc=8) 00:37:48.129 starting I/O failed: -6 00:37:48.129 [2024-12-13 06:43:39.727666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc31800d4d0 is same with the state(6) to be set 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error 
(sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 
Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.129 Read completed with error (sct=0, sc=8) 00:37:48.129 Write completed with error (sct=0, sc=8) 00:37:48.130 Read completed with error (sct=0, sc=8) 00:37:48.130 Read completed with error (sct=0, sc=8) 00:37:48.130 Write completed with error (sct=0, sc=8) 00:37:48.130 Read completed with error (sct=0, sc=8) 00:37:48.130 Read completed with error (sct=0, sc=8) 00:37:48.130 Read completed with error 
(sct=0, sc=8) 00:37:48.130 Read completed with error (sct=0, sc=8) 00:37:48.130 [2024-12-13 06:43:39.728039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1481f70 is same with the state(6) to be set 00:37:49.066 [2024-12-13 06:43:40.694518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1480190 is same with the state(6) to be set 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 [2024-12-13 06:43:40.728546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc31800d060 is same with the state(6) to be set 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Read completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Write completed with error (sct=0, sc=8) 00:37:49.326 Read completed with 
error (sct=0, sc=8)
00:37:49.326 Read completed with error (sct=0, sc=8)
00:37:49.326 Write completed with error (sct=0, sc=8)
[identical "Read/Write completed with error (sct=0, sc=8)" records repeated for the remaining in-flight I/Os]
00:37:49.326 [2024-12-13 06:43:40.728729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fc31800d800 is same with the state(6) to be set
00:37:49.326 [2024-12-13 06:43:40.730138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1482400 is same with the state(6) to be set
00:37:49.326 [2024-12-13 06:43:40.731069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14827c0 is same with the state(6) to be set
00:37:49.326 Initializing NVMe Controllers
00:37:49.326 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:37:49.326
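The (sct=0, sc=8) pairs above are the NVMe completion status fields: status code type 0 is the generic command status set, where code 0x08 corresponds to "Command Aborted due to SQ Deletion" in the NVMe base specification — consistent with this test deleting the subsystem while spdk_nvme_perf still has I/O in flight. A minimal sketch of decoding those fields (hypothetical `decode_status` helper, not part of the SPDK tree; only the values seen in this run are tabulated):

```shell
# Decode the (sct, sc) NVMe completion status pairs seen in this log.
# Hypothetical helper; only the generic (SCT 0) codes observed above are named.
decode_status() {
    local sct=$1 sc=$2
    if [ "$sct" -ne 0 ]; then
        echo "SCT $sct, SC $sc"      # non-generic status types not tabulated
        return
    fi
    case "$sc" in
        0) echo "Generic: Successful Completion" ;;
        8) echo "Generic: Command Aborted due to SQ Deletion" ;;
        *) echo "Generic: SC $sc" ;;
    esac
}

decode_status 0 8   # the status logged for every aborted read/write above
```

The full SCT/SC mapping lives in the NVMe base specification; this covers just enough to read the log.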
Controller IO queue size 128, less than required.
00:37:49.326 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:37:49.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:37:49.326 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:37:49.326 Initialization complete. Launching workers.
00:37:49.326 ========================================================
00:37:49.326                                                            Latency(us)
00:37:49.326 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:37:49.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     169.38       0.08  897088.21     632.02 1043277.90
00:37:49.326 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     162.92       0.08  910235.18     378.41 1011538.82
00:37:49.326 ========================================================
00:37:49.326 Total                                                              :     332.30       0.16  903533.96     378.41 1043277.90
00:37:49.326
00:37:49.326 [2024-12-13 06:43:40.731500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1480190 (9): Bad file descriptor
00:37:49.326 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:37:49.326 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:49.326 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:37:49.326 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227608
00:37:49.326 06:43:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:37:49.586 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:37:49.845 06:43:41
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1227608 00:37:49.845 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1227608) - No such process 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1227608 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1227608 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1227608 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:37:49.845 06:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:49.845 [2024-12-13 06:43:41.266958] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:49.845 06:43:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1228094 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:49.845 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:49.845 [2024-12-13 06:43:41.351214] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
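The repeated `kill -0 1228094` / `sleep 0.5` traces that follow are delete_subsystem.sh polling the backgrounded perf process until it exits or a retry budget runs out. A standalone sketch of that pattern (hypothetical `wait_for_exit` helper; the real script inlines the loop):

```shell
# Poll a background process with kill -0 until it exits, giving up after
# a bounded number of 0.5s sleeps. Hypothetical wait_for_exit helper;
# delete_subsystem.sh inlines the same delay/kill -0/sleep loop.
wait_for_exit() {
    local pid=$1 max=${2:-20} delay=0
    while kill -0 "$pid" 2>/dev/null; do
        if (( delay++ > max )); then
            return 1              # still alive after the retry budget
        fi
        sleep 0.5
    done
    return 0                      # process is gone; kill -0 stopped succeeding
}

sleep 0.2 &                       # stand-in for the backgrounded spdk_nvme_perf
wait_for_exit $! && echo "perf stand-in exited"
```

The `(( delay++ > 20 ))` entries in this log check the same budget; when the process is already gone, `kill -0` fails immediately and the loop falls through, which is why the later `kill: (1228094) - No such process` message is harmless.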
00:37:50.413 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:50.413 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:50.413 06:43:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:50.671 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:50.671 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:50.671 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:51.239 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:51.239 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:51.239 06:43:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:51.806 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:51.806 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:51.806 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:52.374 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:52.374 06:43:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:52.374 06:43:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:52.942 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:37:52.942 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094 00:37:52.942 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:37:52.942 Initializing NVMe Controllers 00:37:52.942 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:52.942 Controller IO queue size 128, less than required. 00:37:52.942 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:52.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:37:52.942 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:37:52.942 Initialization complete. Launching workers. 
00:37:52.942 ========================================================
00:37:52.942                                                            Latency(us)
00:37:52.942 Device Information                                                 :       IOPS      MiB/s    Average        min        max
00:37:52.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:     128.00       0.06 1002975.26 1000198.00 1009293.06
00:37:52.942 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:     128.00       0.06 1005286.07 1000247.65 1041909.12
00:37:52.942 ========================================================
00:37:52.942 Total                                                              :     256.00       0.12 1004130.67 1000198.00 1041909.12
00:37:52.942
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1228094
00:37:53.201 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1228094) - No such process
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1228094
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@125 -- # for i in {1..20} 00:37:53.201 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:53.201 rmmod nvme_tcp 00:37:53.201 rmmod nvme_fabrics 00:37:53.201 rmmod nvme_keyring 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1227463 ']' 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1227463 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1227463 ']' 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1227463 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1227463 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:53.460 06:43:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1227463' 00:37:53.460 killing process with pid 1227463 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1227463 00:37:53.460 06:43:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1227463 00:37:53.460 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:53.461 06:43:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:53.461 06:43:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.002 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:56.002 00:37:56.002 real 0m16.099s 00:37:56.002 user 0m26.322s 00:37:56.002 sys 0m6.031s 00:37:56.002 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:56.002 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:37:56.002 ************************************ 00:37:56.002 END TEST nvmf_delete_subsystem 00:37:56.002 ************************************ 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:56.003 ************************************ 00:37:56.003 START TEST nvmf_host_management 00:37:56.003 ************************************ 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:37:56.003 * Looking for test storage... 
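The scripts/common.sh trace that follows (`lt 1.15 2` via `cmp_versions`) compares dotted version strings numerically, field by field, to decide which lcov options to pass. A minimal sketch of that kind of comparison (hypothetical `ver_lt` helper, not the actual cmp_versions implementation; assumes plain numeric fields, with missing fields treated as 0):

```shell
# Compare two dotted version strings numerically, field by field.
# Hypothetical ver_lt helper; returns success when $1 is strictly less than $2.
ver_lt() {
    local IFS=.
    local -a a=($1) b=($2)                 # split "1.15" into (1 15), etc.
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        if (( ${a[i]:-0} < ${b[i]:-0} )); then return 0; fi
        if (( ${a[i]:-0} > ${b[i]:-0} )); then return 1; fi
    done
    return 1                               # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

The traced `lt 1.15 2` call succeeds for the same reason: 1 < 2 already decides it in the first field, so lcov is treated as older than 2.x.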
00:37:56.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:37:56.003 06:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.003 --rc genhtml_branch_coverage=1 00:37:56.003 --rc genhtml_function_coverage=1 00:37:56.003 --rc genhtml_legend=1 00:37:56.003 --rc geninfo_all_blocks=1 00:37:56.003 --rc geninfo_unexecuted_blocks=1 00:37:56.003 00:37:56.003 ' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.003 --rc genhtml_branch_coverage=1 00:37:56.003 --rc genhtml_function_coverage=1 00:37:56.003 --rc genhtml_legend=1 00:37:56.003 --rc geninfo_all_blocks=1 00:37:56.003 --rc geninfo_unexecuted_blocks=1 00:37:56.003 00:37:56.003 ' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:56.003 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.003 --rc genhtml_branch_coverage=1 00:37:56.003 --rc genhtml_function_coverage=1 00:37:56.003 --rc genhtml_legend=1 00:37:56.003 --rc geninfo_all_blocks=1 00:37:56.003 --rc geninfo_unexecuted_blocks=1 00:37:56.003 00:37:56.003 ' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:56.003 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:56.003 --rc genhtml_branch_coverage=1 00:37:56.003 --rc genhtml_function_coverage=1 00:37:56.003 --rc genhtml_legend=1 00:37:56.003 --rc geninfo_all_blocks=1 00:37:56.003 --rc geninfo_unexecuted_blocks=1 00:37:56.003 00:37:56.003 ' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:56.003 06:43:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:56.003 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.004 
06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:37:56.004 06:43:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:02.574 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:02.575 
06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:02.575 06:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:02.575 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.575 06:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:02.575 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.575 06:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:02.575 Found net devices under 0000:af:00.0: cvl_0_0 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:02.575 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:02.575 Found net devices under 0000:af:00.1: cvl_0_1 00:38:02.575 06:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:02.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:02.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:38:02.575 00:38:02.575 --- 10.0.0.2 ping statistics --- 00:38:02.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.575 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:38:02.575 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:02.575 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:02.575 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:38:02.575 00:38:02.575 --- 10.0.0.1 ping statistics --- 00:38:02.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:02.575 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
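The `nvmf_tcp_init` sequence above gives the test a two-port loop: one physical port stays on the host as the initiator interface, while the other is moved into a private network namespace where the SPDK target will run, so NVMe/TCP traffic crosses the real link between them. The same plumbing as a standalone sketch — interface names and addresses are the ones from this run, and the whole block deliberately no-ops unless run as root with those ports present:

```shell
# Sketch of the namespace setup performed by nvmf_tcp_init above.
# cvl_0_0 / cvl_0_1 and the 10.0.0.x addresses come from this trace.
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0
initiator_if=cvl_0_1

if [ "$(id -u)" -eq 0 ] && ip link show "$target_if" > /dev/null 2>&1; then
    ip netns add "$ns"                      # private namespace for the target
    ip link set "$target_if" netns "$ns"    # move one port into it

    ip addr add 10.0.0.1/24 dev "$initiator_if"                    # host side
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"   # target side

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP listener port; the comment lets cleanup find the rule.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment "SPDK_NVMF:-I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT"

    # Verify both directions, as the trace does with ping -c 1.
    ping -c 1 10.0.0.2
    ip netns exec "$ns" ping -c 1 10.0.0.1
else
    echo "skipping: needs root and the $target_if port from this testbed" >&2
fi
```

With this in place, `NVMF_TARGET_NS_CMD` simply prefixes every target-side command with `ip netns exec cvl_0_0_ns_spdk`, which is why `nvmf_tgt` is launched through it a few lines later.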
00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1232198 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1232198 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1232198 ']' 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:02.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 [2024-12-13 06:43:53.349178] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:02.576 [2024-12-13 06:43:53.350079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:02.576 [2024-12-13 06:43:53.350110] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:02.576 [2024-12-13 06:43:53.416111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:38:02.576 [2024-12-13 06:43:53.439812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:02.576 [2024-12-13 06:43:53.439850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:02.576 [2024-12-13 06:43:53.439857] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:02.576 [2024-12-13 06:43:53.439863] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:02.576 [2024-12-13 06:43:53.439867] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:38:02.576 [2024-12-13 06:43:53.441134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:02.576 [2024-12-13 06:43:53.441247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:38:02.576 [2024-12-13 06:43:53.441354] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:02.576 [2024-12-13 06:43:53.441355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:38:02.576 [2024-12-13 06:43:53.505247] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:02.576 [2024-12-13 06:43:53.506431] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:38:02.576 [2024-12-13 06:43:53.506469] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:02.576 [2024-12-13 06:43:53.506957] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:02.576 [2024-12-13 06:43:53.506991] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 [2024-12-13 06:43:53.573997] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 06:43:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 Malloc0 00:38:02.576 [2024-12-13 06:43:53.662397] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1232249 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1232249 /var/tmp/bdevperf.sock 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1232249 ']' 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:02.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:02.576 { 00:38:02.576 "params": { 00:38:02.576 "name": "Nvme$subsystem", 00:38:02.576 "trtype": "$TEST_TRANSPORT", 00:38:02.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:02.576 "adrfam": "ipv4", 00:38:02.576 "trsvcid": "$NVMF_PORT", 00:38:02.576 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:38:02.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:02.576 "hdgst": ${hdgst:-false}, 00:38:02.576 "ddgst": ${ddgst:-false} 00:38:02.576 }, 00:38:02.576 "method": "bdev_nvme_attach_controller" 00:38:02.576 } 00:38:02.576 EOF 00:38:02.576 )") 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:02.576 06:43:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:02.576 "params": { 00:38:02.576 "name": "Nvme0", 00:38:02.576 "trtype": "tcp", 00:38:02.576 "traddr": "10.0.0.2", 00:38:02.576 "adrfam": "ipv4", 00:38:02.576 "trsvcid": "4420", 00:38:02.576 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:02.576 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:02.576 "hdgst": false, 00:38:02.576 "ddgst": false 00:38:02.576 }, 00:38:02.576 "method": "bdev_nvme_attach_controller" 00:38:02.576 }' 00:38:02.577 [2024-12-13 06:43:53.758676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:02.577 [2024-12-13 06:43:53.758724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232249 ] 00:38:02.577 [2024-12-13 06:43:53.834858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:02.577 [2024-12-13 06:43:53.857094] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:02.577 Running I/O for 10 seconds... 
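The config that bdevperf reads through `--json /dev/fd/63` is assembled by `gen_nvmf_target_json` from one heredoc per subsystem, as the trace at `nvmf/common.sh@582` shows. A standalone sketch of just that per-subsystem entry — the surrounding wrapper `gen_nvmf_target_json` adds is not visible in this trace and is left out, and the expanded values match the `printf` output above for subsystem 0:

```shell
# Emit one bdev_nvme_attach_controller config entry for subsystem $1,
# with the defaults this run used (TCP to 10.0.0.2:4420, digests off).
gen_attach_entry() {
    local subsystem=${1:-0}
    cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_attach_entry 0
```

The test then hands the full document to `bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10` via process substitution, so bdevperf attaches to the target namespace's listener without any config file on disk.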
00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:38:02.577 06:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=103 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 103 -ge 100 ']' 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.577 
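The `waitforio` loop traced above (target/host_management.sh@54-64) polls `bdev_get_iostat` up to 10 times and breaks once at least 100 reads have completed (here `read_io_count=103` on the first poll). A self-contained sketch of that pattern, where `get_read_ops` is a stand-in for the real `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops'`:

```shell
# Stubbed stand-in (assumption) for the RPC + jq query in the harness;
# the real call talks to bdevperf over /var/tmp/bdevperf.sock.
get_read_ops() { echo 103; }

# Poll up to 10 times; succeed once >= 100 read ops are observed.
waitforio() {
    local i count ret=1
    for ((i = 10; i != 0; i--)); do
        count=$(get_read_ops)
        if [ "$count" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio && echo "I/O observed"
```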
[2024-12-13 06:43:54.145892] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1458240 is same with the state(6) to be set 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.577 06:43:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:02.577 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:02.577 [2024-12-13 06:43:54.154591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:38:02.577 [2024-12-13 06:43:54.154623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.577 [2024-12-13 06:43:54.154632] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:38:02.577 [2024-12-13 06:43:54.154640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.577 [2024-12-13 06:43:54.154648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:38:02.577 [2024-12-13 06:43:54.154654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.577 [2024-12-13 06:43:54.154662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:38:02.577 [2024-12-13 06:43:54.154668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.577 [2024-12-13 06:43:54.154675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fba490 is same with the state(6) to 
be set 00:38:02.577 [2024-12-13 06:43:54.154746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.577 [2024-12-13 06:43:54.154757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.577 
[... 62 further WRITE command/completion pairs (cid:1-62, nsid:1, lba:24704-32512 in steps of 128, len:128), each completed ABORTED - SQ DELETION (00/08), between 06:43:54.154770 and 06:43:54.155701 ...]
[2024-12-13 06:43:54.155708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:02.579 [2024-12-13 06:43:54.155715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:02.579 
[2024-12-13 06:43:54.156636] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:02.579 task offset: 24576 on job bdev=Nvme0n1 fails 00:38:02.579 
00:38:02.579 Latency(us) 00:38:02.579 
[2024-12-13T05:43:54.233Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:02.579 
Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:02.579 
Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:02.579 
Verification LBA range: start 0x0 length 0x400 00:38:02.579 
Nvme0n1 : 0.11 1756.46 109.78 585.49 0.00 25211.57 1482.36 27462.70 00:38:02.579 
[2024-12-13T05:43:54.233Z] =================================================================================================================== 00:38:02.579 
[2024-12-13T05:43:54.233Z] Total : 1756.46 109.78 585.49 0.00 25211.57 1482.36 27462.70 00:38:02.579 
[2024-12-13 06:43:54.158975] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:02.579 [2024-12-13 06:43:54.158994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fba490 (9): Bad file descriptor 00:38:02.579 [2024-12-13 06:43:54.162108] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:38:02.579 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:02.579 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:03.515 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1232249 00:38:03.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1232249) - No such process 00:38:03.515 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:03.515 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:03.774 { 00:38:03.774 "params": { 00:38:03.774 "name": "Nvme$subsystem", 00:38:03.774 "trtype": 
"$TEST_TRANSPORT", 00:38:03.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:03.774 "adrfam": "ipv4", 00:38:03.774 "trsvcid": "$NVMF_PORT", 00:38:03.774 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:03.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:03.774 "hdgst": ${hdgst:-false}, 00:38:03.774 "ddgst": ${ddgst:-false} 00:38:03.774 }, 00:38:03.774 "method": "bdev_nvme_attach_controller" 00:38:03.774 } 00:38:03.774 EOF 00:38:03.774 )") 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:03.774 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:03.774 "params": { 00:38:03.774 "name": "Nvme0", 00:38:03.774 "trtype": "tcp", 00:38:03.774 "traddr": "10.0.0.2", 00:38:03.774 "adrfam": "ipv4", 00:38:03.774 "trsvcid": "4420", 00:38:03.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:03.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:03.774 "hdgst": false, 00:38:03.774 "ddgst": false 00:38:03.774 }, 00:38:03.774 "method": "bdev_nvme_attach_controller" 00:38:03.774 }' 00:38:03.774 [2024-12-13 06:43:55.216469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:03.774 [2024-12-13 06:43:55.216519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1232481 ] 00:38:03.774 [2024-12-13 06:43:55.291517] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:03.774 [2024-12-13 06:43:55.312241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:04.033 Running I/O for 1 seconds... 00:38:04.970 1995.00 IOPS, 124.69 MiB/s 00:38:04.970 Latency(us) 00:38:04.970 [2024-12-13T05:43:56.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:04.970 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:04.970 Verification LBA range: start 0x0 length 0x400 00:38:04.970 Nvme0n1 : 1.01 2043.66 127.73 0.00 0.00 30723.78 1903.66 27213.04 00:38:04.970 [2024-12-13T05:43:56.624Z] =================================================================================================================== 00:38:04.970 [2024-12-13T05:43:56.624Z] Total : 2043.66 127.73 0.00 0.00 30723.78 1903.66 27213.04 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:05.229 rmmod nvme_tcp 00:38:05.229 rmmod nvme_fabrics 00:38:05.229 rmmod nvme_keyring 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1232198 ']' 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1232198 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1232198 ']' 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1232198 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:05.229 06:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1232198 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1232198' 00:38:05.229 killing process with pid 1232198 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1232198 00:38:05.229 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1232198 00:38:05.488 [2024-12-13 06:43:56.963078] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:05.488 06:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:05.488 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:08.023 00:38:08.023 real 0m11.834s 00:38:08.023 user 0m16.004s 00:38:08.023 sys 0m5.994s 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:08.023 ************************************ 00:38:08.023 END TEST nvmf_host_management 00:38:08.023 ************************************ 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:08.023 
06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:08.023 ************************************ 00:38:08.023 START TEST nvmf_lvol 00:38:08.023 ************************************ 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:08.023 * Looking for test storage... 00:38:08.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:08.023 06:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:08.023 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.024 --rc genhtml_branch_coverage=1 00:38:08.024 --rc 
genhtml_function_coverage=1 00:38:08.024 --rc genhtml_legend=1 00:38:08.024 --rc geninfo_all_blocks=1 00:38:08.024 --rc geninfo_unexecuted_blocks=1 00:38:08.024 00:38:08.024 ' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.024 --rc genhtml_branch_coverage=1 00:38:08.024 --rc genhtml_function_coverage=1 00:38:08.024 --rc genhtml_legend=1 00:38:08.024 --rc geninfo_all_blocks=1 00:38:08.024 --rc geninfo_unexecuted_blocks=1 00:38:08.024 00:38:08.024 ' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.024 --rc genhtml_branch_coverage=1 00:38:08.024 --rc genhtml_function_coverage=1 00:38:08.024 --rc genhtml_legend=1 00:38:08.024 --rc geninfo_all_blocks=1 00:38:08.024 --rc geninfo_unexecuted_blocks=1 00:38:08.024 00:38:08.024 ' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:08.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:08.024 --rc genhtml_branch_coverage=1 00:38:08.024 --rc genhtml_function_coverage=1 00:38:08.024 --rc genhtml_legend=1 00:38:08.024 --rc geninfo_all_blocks=1 00:38:08.024 --rc geninfo_unexecuted_blocks=1 00:38:08.024 00:38:08.024 ' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.024 06:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:08.024 06:43:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:08.024 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:08.025 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:08.025 06:43:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:13.301 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:13.301 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.301 06:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.301 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:13.302 Found net devices under 0000:af:00.0: cvl_0_0 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:13.302 Found net devices under 0000:af:00.1: cvl_0_1 00:38:13.302 06:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:13.302 06:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:13.302 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:13.566 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:13.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:13.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.347 ms 00:38:13.566 00:38:13.566 --- 10.0.0.2 ping statistics --- 00:38:13.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.566 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:13.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:13.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:38:13.566 00:38:13.566 --- 10.0.0.1 ping statistics --- 00:38:13.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:13.566 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:13.566 
06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:13.566 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1236170 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1236170 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1236170 ']' 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:13.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:13.828 [2024-12-13 06:44:05.274542] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:13.828 [2024-12-13 06:44:05.275411] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:13.828 [2024-12-13 06:44:05.275440] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:13.828 [2024-12-13 06:44:05.352046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:13.828 [2024-12-13 06:44:05.381991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:13.828 [2024-12-13 06:44:05.382039] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:13.828 [2024-12-13 06:44:05.382049] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:13.828 [2024-12-13 06:44:05.382058] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:13.828 [2024-12-13 06:44:05.382065] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:13.828 [2024-12-13 06:44:05.383667] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:13.828 [2024-12-13 06:44:05.383782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:13.828 [2024-12-13 06:44:05.383782] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:13.828 [2024-12-13 06:44:05.457769] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:13.828 [2024-12-13 06:44:05.458275] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:13.828 [2024-12-13 06:44:05.458486] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:38:13.828 [2024-12-13 06:44:05.458678] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:13.828 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:14.086 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:14.086 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:14.086 [2024-12-13 06:44:05.688256] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:14.086 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.345 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:14.345 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:14.604 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:14.604 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:14.863 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:15.122 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a9e88c1d-375f-4e37-bfa7-a9de4b12b436 00:38:15.122 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a9e88c1d-375f-4e37-bfa7-a9de4b12b436 lvol 20 00:38:15.122 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=56023933-61eb-46b4-a1da-9553dd6e5168 00:38:15.122 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:15.380 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 56023933-61eb-46b4-a1da-9553dd6e5168 00:38:15.639 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:15.898 [2024-12-13 06:44:07.296413] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:15.898 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:15.898 
06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1236550 00:38:15.898 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:15.898 06:44:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:17.276 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 56023933-61eb-46b4-a1da-9553dd6e5168 MY_SNAPSHOT 00:38:17.276 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2f4758ad-3b77-4ab4-872c-a60244338f1f 00:38:17.276 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 56023933-61eb-46b4-a1da-9553dd6e5168 30 00:38:17.535 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2f4758ad-3b77-4ab4-872c-a60244338f1f MY_CLONE 00:38:17.794 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=bd33b0e9-e4d7-473e-afb8-b2bcecf86999 00:38:17.794 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate bd33b0e9-e4d7-473e-afb8-b2bcecf86999 00:38:18.102 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1236550 00:38:26.269 Initializing NVMe Controllers 00:38:26.269 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:26.269 
Controller IO queue size 128, less than required. 00:38:26.269 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:26.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:26.269 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:26.269 Initialization complete. Launching workers. 00:38:26.269 ======================================================== 00:38:26.269 Latency(us) 00:38:26.269 Device Information : IOPS MiB/s Average min max 00:38:26.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12373.40 48.33 10348.46 1512.04 66396.74 00:38:26.269 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12571.30 49.11 10181.70 589.01 57786.74 00:38:26.269 ======================================================== 00:38:26.269 Total : 24944.70 97.44 10264.42 589.01 66396.74 00:38:26.269 00:38:26.269 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:26.528 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 56023933-61eb-46b4-a1da-9553dd6e5168 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a9e88c1d-375f-4e37-bfa7-a9de4b12b436 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # 
nvmftestfini 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:26.787 rmmod nvme_tcp 00:38:26.787 rmmod nvme_fabrics 00:38:26.787 rmmod nvme_keyring 00:38:26.787 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1236170 ']' 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1236170 ']' 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1236170' 00:38:27.046 killing process with pid 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1236170 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:27.046 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:27.305 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:27.305 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:27.305 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.305 06:44:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.305 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:29.209 00:38:29.209 real 0m21.633s 00:38:29.209 user 0m55.380s 00:38:29.209 sys 0m9.543s 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:29.209 ************************************ 00:38:29.209 END TEST nvmf_lvol 00:38:29.209 ************************************ 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:29.209 ************************************ 00:38:29.209 START TEST nvmf_lvs_grow 00:38:29.209 ************************************ 00:38:29.209 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:29.469 * Looking for test storage... 
00:38:29.469 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:29.469 06:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:29.469 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:29.469 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.469 --rc genhtml_branch_coverage=1 00:38:29.469 --rc genhtml_function_coverage=1 00:38:29.469 --rc genhtml_legend=1 00:38:29.469 --rc geninfo_all_blocks=1 00:38:29.469 --rc geninfo_unexecuted_blocks=1 00:38:29.469 00:38:29.469 ' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.469 --rc genhtml_branch_coverage=1 00:38:29.469 --rc genhtml_function_coverage=1 00:38:29.469 --rc genhtml_legend=1 00:38:29.469 --rc geninfo_all_blocks=1 00:38:29.469 --rc geninfo_unexecuted_blocks=1 00:38:29.469 00:38:29.469 ' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.469 --rc genhtml_branch_coverage=1 00:38:29.469 --rc genhtml_function_coverage=1 00:38:29.469 --rc genhtml_legend=1 00:38:29.469 --rc geninfo_all_blocks=1 00:38:29.469 --rc geninfo_unexecuted_blocks=1 00:38:29.469 00:38:29.469 ' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:29.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:29.469 --rc genhtml_branch_coverage=1 00:38:29.469 --rc genhtml_function_coverage=1 00:38:29.469 --rc genhtml_legend=1 00:38:29.469 --rc geninfo_all_blocks=1 00:38:29.469 --rc 
geninfo_unexecuted_blocks=1 00:38:29.469 00:38:29.469 ' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:29.469 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:29.469 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.469 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:29.470 06:44:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:29.470 06:44:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.039 
06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:36.039 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:36.039 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:36.040 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:36.040 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:36.040 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:36.040 Found net devices under 0000:af:00.0: cvl_0_0 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.040 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:36.040 Found net devices under 0000:af:00.1: cvl_0_1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:36.040 
06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:36.040 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:36.040 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:38:36.040 00:38:36.040 --- 10.0.0.2 ping statistics --- 00:38:36.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.040 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:36.040 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:36.040 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:38:36.040 00:38:36.040 --- 10.0.0.1 ping statistics --- 00:38:36.040 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:36.040 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.040 06:44:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1241674 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1241674 00:38:36.040 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1241674 ']' 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:36.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:36.041 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.041 [2024-12-13 06:44:26.939164] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:36.041 [2024-12-13 06:44:26.940034] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:36.041 [2024-12-13 06:44:26.940065] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:36.041 [2024-12-13 06:44:27.018926] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:36.041 [2024-12-13 06:44:27.040363] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:36.041 [2024-12-13 06:44:27.040396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:36.041 [2024-12-13 06:44:27.040403] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:36.041 [2024-12-13 06:44:27.040409] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:36.041 [2024-12-13 06:44:27.040417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:36.041 [2024-12-13 06:44:27.040901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:36.041 [2024-12-13 06:44:27.103814] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:36.041 [2024-12-13 06:44:27.104014] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:36.041 [2024-12-13 06:44:27.345566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:36.041 ************************************ 00:38:36.041 START TEST lvs_grow_clean 00:38:36.041 ************************************ 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:36.041 06:44:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:36.041 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:36.300 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:36.300 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:36.300 06:44:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:36.559 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:36.559 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:36.559 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 lvol 150 00:38:36.818 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=3b3522f8-9841-497f-8089-726e5d6b6b49 00:38:36.818 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:36.818 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:36.818 [2024-12-13 06:44:28.405283] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:36.818 [2024-12-13 06:44:28.405404] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:36.818 true 00:38:36.818 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:36.818 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:37.077 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:37.077 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:37.336 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3b3522f8-9841-497f-8089-726e5d6b6b49 00:38:37.336 06:44:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:37.595 [2024-12-13 06:44:29.145776] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:37.595 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1242162 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1242162 /var/tmp/bdevperf.sock 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1242162 ']' 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:37.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:37.854 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:37.854 [2024-12-13 06:44:29.403137] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:38:37.854 [2024-12-13 06:44:29.403186] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1242162 ] 00:38:37.854 [2024-12-13 06:44:29.477272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:37.854 [2024-12-13 06:44:29.499612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:38.112 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:38.112 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:38.112 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:38.370 Nvme0n1 00:38:38.370 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:38.629 [ 00:38:38.629 { 00:38:38.629 "name": "Nvme0n1", 00:38:38.629 "aliases": [ 00:38:38.629 "3b3522f8-9841-497f-8089-726e5d6b6b49" 00:38:38.629 ], 00:38:38.629 "product_name": "NVMe disk", 00:38:38.629 
"block_size": 4096, 00:38:38.629 "num_blocks": 38912, 00:38:38.629 "uuid": "3b3522f8-9841-497f-8089-726e5d6b6b49", 00:38:38.629 "numa_id": 1, 00:38:38.629 "assigned_rate_limits": { 00:38:38.629 "rw_ios_per_sec": 0, 00:38:38.629 "rw_mbytes_per_sec": 0, 00:38:38.629 "r_mbytes_per_sec": 0, 00:38:38.629 "w_mbytes_per_sec": 0 00:38:38.629 }, 00:38:38.629 "claimed": false, 00:38:38.629 "zoned": false, 00:38:38.629 "supported_io_types": { 00:38:38.629 "read": true, 00:38:38.629 "write": true, 00:38:38.629 "unmap": true, 00:38:38.629 "flush": true, 00:38:38.629 "reset": true, 00:38:38.629 "nvme_admin": true, 00:38:38.629 "nvme_io": true, 00:38:38.629 "nvme_io_md": false, 00:38:38.629 "write_zeroes": true, 00:38:38.629 "zcopy": false, 00:38:38.629 "get_zone_info": false, 00:38:38.629 "zone_management": false, 00:38:38.629 "zone_append": false, 00:38:38.629 "compare": true, 00:38:38.629 "compare_and_write": true, 00:38:38.629 "abort": true, 00:38:38.629 "seek_hole": false, 00:38:38.629 "seek_data": false, 00:38:38.629 "copy": true, 00:38:38.629 "nvme_iov_md": false 00:38:38.629 }, 00:38:38.629 "memory_domains": [ 00:38:38.629 { 00:38:38.629 "dma_device_id": "system", 00:38:38.629 "dma_device_type": 1 00:38:38.629 } 00:38:38.629 ], 00:38:38.629 "driver_specific": { 00:38:38.629 "nvme": [ 00:38:38.629 { 00:38:38.629 "trid": { 00:38:38.629 "trtype": "TCP", 00:38:38.629 "adrfam": "IPv4", 00:38:38.629 "traddr": "10.0.0.2", 00:38:38.629 "trsvcid": "4420", 00:38:38.629 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:38.629 }, 00:38:38.629 "ctrlr_data": { 00:38:38.629 "cntlid": 1, 00:38:38.629 "vendor_id": "0x8086", 00:38:38.629 "model_number": "SPDK bdev Controller", 00:38:38.629 "serial_number": "SPDK0", 00:38:38.629 "firmware_revision": "25.01", 00:38:38.629 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:38.629 "oacs": { 00:38:38.629 "security": 0, 00:38:38.629 "format": 0, 00:38:38.629 "firmware": 0, 00:38:38.629 "ns_manage": 0 00:38:38.629 }, 00:38:38.629 "multi_ctrlr": true, 
00:38:38.629 "ana_reporting": false 00:38:38.629 }, 00:38:38.629 "vs": { 00:38:38.629 "nvme_version": "1.3" 00:38:38.629 }, 00:38:38.629 "ns_data": { 00:38:38.629 "id": 1, 00:38:38.629 "can_share": true 00:38:38.629 } 00:38:38.629 } 00:38:38.629 ], 00:38:38.629 "mp_policy": "active_passive" 00:38:38.629 } 00:38:38.629 } 00:38:38.629 ] 00:38:38.629 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1242167 00:38:38.629 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:38.629 06:44:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:38.629 Running I/O for 10 seconds... 00:38:40.007 Latency(us) 00:38:40.007 [2024-12-13T05:44:31.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:40.007 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.007 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:40.007 [2024-12-13T05:44:31.661Z] =================================================================================================================== 00:38:40.007 [2024-12-13T05:44:31.661Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:38:40.007 00:38:40.575 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:40.833 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:40.833 Nvme0n1 : 2.00 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:40.833 [2024-12-13T05:44:32.488Z] 
=================================================================================================================== 00:38:40.834 [2024-12-13T05:44:32.488Z] Total : 23114.00 90.29 0.00 0.00 0.00 0.00 0.00 00:38:40.834 00:38:40.834 true 00:38:40.834 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:40.834 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:41.092 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:41.092 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:41.092 06:44:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1242167 00:38:41.660 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:41.660 Nvme0n1 : 3.00 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:41.660 [2024-12-13T05:44:33.314Z] =================================================================================================================== 00:38:41.660 [2024-12-13T05:44:33.314Z] Total : 23241.00 90.79 0.00 0.00 0.00 0.00 0.00 00:38:41.660 00:38:43.037 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:43.037 Nvme0n1 : 4.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:43.037 [2024-12-13T05:44:34.691Z] =================================================================================================================== 00:38:43.037 [2024-12-13T05:44:34.691Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:38:43.037 00:38:43.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:38:43.983 Nvme0n1 : 5.00 23422.20 91.49 0.00 0.00 0.00 0.00 0.00 00:38:43.983 [2024-12-13T05:44:35.637Z] =================================================================================================================== 00:38:43.983 [2024-12-13T05:44:35.637Z] Total : 23422.20 91.49 0.00 0.00 0.00 0.00 0.00 00:38:43.983 00:38:44.920 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:44.920 Nvme0n1 : 6.00 23476.67 91.71 0.00 0.00 0.00 0.00 0.00 00:38:44.920 [2024-12-13T05:44:36.574Z] =================================================================================================================== 00:38:44.920 [2024-12-13T05:44:36.574Z] Total : 23476.67 91.71 0.00 0.00 0.00 0.00 0.00 00:38:44.920 00:38:45.857 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:45.857 Nvme0n1 : 7.00 23515.57 91.86 0.00 0.00 0.00 0.00 0.00 00:38:45.857 [2024-12-13T05:44:37.511Z] =================================================================================================================== 00:38:45.857 [2024-12-13T05:44:37.511Z] Total : 23515.57 91.86 0.00 0.00 0.00 0.00 0.00 00:38:45.857 00:38:46.793 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:46.793 Nvme0n1 : 8.00 23544.75 91.97 0.00 0.00 0.00 0.00 0.00 00:38:46.793 [2024-12-13T05:44:38.447Z] =================================================================================================================== 00:38:46.793 [2024-12-13T05:44:38.447Z] Total : 23544.75 91.97 0.00 0.00 0.00 0.00 0.00 00:38:46.793 00:38:47.730 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:47.730 Nvme0n1 : 9.00 23567.44 92.06 0.00 0.00 0.00 0.00 0.00 00:38:47.730 [2024-12-13T05:44:39.384Z] =================================================================================================================== 00:38:47.730 [2024-12-13T05:44:39.384Z] Total : 23567.44 92.06 0.00 0.00 0.00 0.00 0.00 00:38:47.730 
00:38:48.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.666 Nvme0n1 : 10.00 23572.90 92.08 0.00 0.00 0.00 0.00 0.00 00:38:48.666 [2024-12-13T05:44:40.320Z] =================================================================================================================== 00:38:48.666 [2024-12-13T05:44:40.320Z] Total : 23572.90 92.08 0.00 0.00 0.00 0.00 0.00 00:38:48.666 00:38:48.666 00:38:48.666 Latency(us) 00:38:48.666 [2024-12-13T05:44:40.320Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.666 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:48.666 Nvme0n1 : 10.00 23577.36 92.10 0.00 0.00 5426.02 3229.99 26089.57 00:38:48.666 [2024-12-13T05:44:40.320Z] =================================================================================================================== 00:38:48.666 [2024-12-13T05:44:40.320Z] Total : 23577.36 92.10 0.00 0.00 5426.02 3229.99 26089.57 00:38:48.666 { 00:38:48.666 "results": [ 00:38:48.666 { 00:38:48.666 "job": "Nvme0n1", 00:38:48.666 "core_mask": "0x2", 00:38:48.666 "workload": "randwrite", 00:38:48.666 "status": "finished", 00:38:48.666 "queue_depth": 128, 00:38:48.666 "io_size": 4096, 00:38:48.666 "runtime": 10.003537, 00:38:48.666 "iops": 23577.360687524822, 00:38:48.666 "mibps": 92.09906518564384, 00:38:48.666 "io_failed": 0, 00:38:48.666 "io_timeout": 0, 00:38:48.666 "avg_latency_us": 5426.021012037762, 00:38:48.666 "min_latency_us": 3229.9885714285715, 00:38:48.666 "max_latency_us": 26089.569523809525 00:38:48.666 } 00:38:48.666 ], 00:38:48.666 "core_count": 1 00:38:48.666 } 00:38:48.666 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1242162 00:38:48.666 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1242162 ']' 00:38:48.925 06:44:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1242162 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1242162 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1242162' 00:38:48.925 killing process with pid 1242162 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1242162 00:38:48.925 Received shutdown signal, test time was about 10.000000 seconds 00:38:48.925 00:38:48.925 Latency(us) 00:38:48.925 [2024-12-13T05:44:40.579Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:48.925 [2024-12-13T05:44:40.579Z] =================================================================================================================== 00:38:48.925 [2024-12-13T05:44:40.579Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1242162 00:38:48.925 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:49.184 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:49.443 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:49.443 06:44:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:49.701 [2024-12-13 06:44:41.305349] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.701 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:49.959 request: 00:38:49.959 { 00:38:49.959 "uuid": "ab8e734f-56e0-4be4-9cb7-c08bfc6a8688", 00:38:49.959 "method": 
"bdev_lvol_get_lvstores", 00:38:49.959 "req_id": 1 00:38:49.959 } 00:38:49.959 Got JSON-RPC error response 00:38:49.959 response: 00:38:49.959 { 00:38:49.959 "code": -19, 00:38:49.959 "message": "No such device" 00:38:49.959 } 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:49.959 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:50.217 aio_bdev 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3b3522f8-9841-497f-8089-726e5d6b6b49 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=3b3522f8-9841-497f-8089-726e5d6b6b49 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:38:50.217 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:38:50.476 06:44:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3b3522f8-9841-497f-8089-726e5d6b6b49 -t 2000 00:38:50.476 [ 00:38:50.476 { 00:38:50.476 "name": "3b3522f8-9841-497f-8089-726e5d6b6b49", 00:38:50.476 "aliases": [ 00:38:50.476 "lvs/lvol" 00:38:50.476 ], 00:38:50.476 "product_name": "Logical Volume", 00:38:50.476 "block_size": 4096, 00:38:50.476 "num_blocks": 38912, 00:38:50.476 "uuid": "3b3522f8-9841-497f-8089-726e5d6b6b49", 00:38:50.476 "assigned_rate_limits": { 00:38:50.476 "rw_ios_per_sec": 0, 00:38:50.476 "rw_mbytes_per_sec": 0, 00:38:50.476 "r_mbytes_per_sec": 0, 00:38:50.476 "w_mbytes_per_sec": 0 00:38:50.476 }, 00:38:50.476 "claimed": false, 00:38:50.476 "zoned": false, 00:38:50.476 "supported_io_types": { 00:38:50.476 "read": true, 00:38:50.476 "write": true, 00:38:50.476 "unmap": true, 00:38:50.476 "flush": false, 00:38:50.476 "reset": true, 00:38:50.476 "nvme_admin": false, 00:38:50.476 "nvme_io": false, 00:38:50.476 "nvme_io_md": false, 00:38:50.476 "write_zeroes": true, 00:38:50.476 "zcopy": false, 00:38:50.476 "get_zone_info": false, 00:38:50.476 "zone_management": false, 00:38:50.476 "zone_append": false, 00:38:50.476 "compare": false, 00:38:50.476 "compare_and_write": false, 00:38:50.476 "abort": false, 00:38:50.476 "seek_hole": true, 00:38:50.476 "seek_data": true, 00:38:50.476 "copy": false, 00:38:50.476 "nvme_iov_md": false 00:38:50.476 }, 00:38:50.476 "driver_specific": { 00:38:50.476 "lvol": { 00:38:50.476 "lvol_store_uuid": "ab8e734f-56e0-4be4-9cb7-c08bfc6a8688", 00:38:50.476 "base_bdev": "aio_bdev", 00:38:50.476 
"thin_provision": false, 00:38:50.476 "num_allocated_clusters": 38, 00:38:50.476 "snapshot": false, 00:38:50.476 "clone": false, 00:38:50.476 "esnap_clone": false 00:38:50.476 } 00:38:50.476 } 00:38:50.476 } 00:38:50.476 ] 00:38:50.476 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:38:50.476 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:50.476 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:38:50.735 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:38:50.735 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:38:50.735 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 00:38:50.994 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:38:50.994 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3b3522f8-9841-497f-8089-726e5d6b6b49 00:38:51.253 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u ab8e734f-56e0-4be4-9cb7-c08bfc6a8688 
00:38:51.253 06:44:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:51.513 00:38:51.513 real 0m15.717s 00:38:51.513 user 0m15.239s 00:38:51.513 sys 0m1.493s 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:51.513 ************************************ 00:38:51.513 END TEST lvs_grow_clean 00:38:51.513 ************************************ 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:51.513 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:51.772 ************************************ 00:38:51.772 START TEST lvs_grow_dirty 00:38:51.772 ************************************ 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:51.772 06:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:51.772 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:52.031 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:52.031 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:52.031 06:44:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:38:52.031 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:38:52.031 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:52.290 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:52.290 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:52.290 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 lvol 150 00:38:52.549 06:44:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:38:52.549 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:52.549 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:52.549 [2024-12-13 06:44:44.169279] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:52.549 [2024-12-13 
06:44:44.169404] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:52.549 true 00:38:52.549 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:38:52.549 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:52.808 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:52.808 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:53.067 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:38:53.326 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:53.326 [2024-12-13 06:44:44.929721] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:53.326 06:44:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1244666 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1244666 /var/tmp/bdevperf.sock 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1244666 ']' 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:53.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:53.586 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:38:53.586 [2024-12-13 06:44:45.159535] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:38:53.586 [2024-12-13 06:44:45.159582] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1244666 ] 00:38:53.586 [2024-12-13 06:44:45.232736] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:53.844 [2024-12-13 06:44:45.255085] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:53.844 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:53.844 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:38:53.844 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:54.103 Nvme0n1 00:38:54.103 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:54.362 [ 00:38:54.362 { 00:38:54.362 "name": "Nvme0n1", 00:38:54.362 "aliases": [ 00:38:54.362 "9bed5646-509a-4357-a4fc-08f8d1fbeb18" 00:38:54.362 ], 00:38:54.362 "product_name": "NVMe disk", 00:38:54.362 "block_size": 4096, 00:38:54.362 "num_blocks": 38912, 00:38:54.362 "uuid": "9bed5646-509a-4357-a4fc-08f8d1fbeb18", 00:38:54.362 "numa_id": 1, 00:38:54.362 "assigned_rate_limits": { 00:38:54.362 "rw_ios_per_sec": 0, 00:38:54.362 "rw_mbytes_per_sec": 0, 00:38:54.362 "r_mbytes_per_sec": 0, 00:38:54.362 "w_mbytes_per_sec": 0 00:38:54.362 }, 00:38:54.362 "claimed": false, 00:38:54.362 "zoned": false, 
00:38:54.362 "supported_io_types": { 00:38:54.362 "read": true, 00:38:54.362 "write": true, 00:38:54.362 "unmap": true, 00:38:54.362 "flush": true, 00:38:54.362 "reset": true, 00:38:54.362 "nvme_admin": true, 00:38:54.362 "nvme_io": true, 00:38:54.362 "nvme_io_md": false, 00:38:54.362 "write_zeroes": true, 00:38:54.362 "zcopy": false, 00:38:54.362 "get_zone_info": false, 00:38:54.362 "zone_management": false, 00:38:54.362 "zone_append": false, 00:38:54.362 "compare": true, 00:38:54.362 "compare_and_write": true, 00:38:54.362 "abort": true, 00:38:54.362 "seek_hole": false, 00:38:54.362 "seek_data": false, 00:38:54.362 "copy": true, 00:38:54.362 "nvme_iov_md": false 00:38:54.362 }, 00:38:54.362 "memory_domains": [ 00:38:54.362 { 00:38:54.362 "dma_device_id": "system", 00:38:54.362 "dma_device_type": 1 00:38:54.362 } 00:38:54.362 ], 00:38:54.362 "driver_specific": { 00:38:54.362 "nvme": [ 00:38:54.362 { 00:38:54.362 "trid": { 00:38:54.362 "trtype": "TCP", 00:38:54.362 "adrfam": "IPv4", 00:38:54.362 "traddr": "10.0.0.2", 00:38:54.362 "trsvcid": "4420", 00:38:54.362 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:54.362 }, 00:38:54.362 "ctrlr_data": { 00:38:54.362 "cntlid": 1, 00:38:54.362 "vendor_id": "0x8086", 00:38:54.362 "model_number": "SPDK bdev Controller", 00:38:54.362 "serial_number": "SPDK0", 00:38:54.362 "firmware_revision": "25.01", 00:38:54.362 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:54.362 "oacs": { 00:38:54.362 "security": 0, 00:38:54.362 "format": 0, 00:38:54.362 "firmware": 0, 00:38:54.362 "ns_manage": 0 00:38:54.362 }, 00:38:54.362 "multi_ctrlr": true, 00:38:54.362 "ana_reporting": false 00:38:54.362 }, 00:38:54.362 "vs": { 00:38:54.362 "nvme_version": "1.3" 00:38:54.362 }, 00:38:54.362 "ns_data": { 00:38:54.362 "id": 1, 00:38:54.362 "can_share": true 00:38:54.362 } 00:38:54.362 } 00:38:54.362 ], 00:38:54.362 "mp_policy": "active_passive" 00:38:54.362 } 00:38:54.362 } 00:38:54.362 ] 00:38:54.362 06:44:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1244684 00:38:54.362 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:54.362 06:44:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:54.362 Running I/O for 10 seconds... 00:38:55.738 Latency(us) 00:38:55.738 [2024-12-13T05:44:47.392Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:55.738 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:55.738 Nvme0n1 : 1.00 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:55.738 [2024-12-13T05:44:47.392Z] =================================================================================================================== 00:38:55.738 [2024-12-13T05:44:47.392Z] Total : 22860.00 89.30 0.00 0.00 0.00 0.00 0.00 00:38:55.738 00:38:56.305 06:44:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:38:56.564 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:56.564 Nvme0n1 : 2.00 23146.00 90.41 0.00 0.00 0.00 0.00 0.00 00:38:56.564 [2024-12-13T05:44:48.218Z] =================================================================================================================== 00:38:56.564 [2024-12-13T05:44:48.218Z] Total : 23146.00 90.41 0.00 0.00 0.00 0.00 0.00 00:38:56.564 00:38:56.564 true 00:38:56.564 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:38:56.564 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:38:56.823 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:38:56.823 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:38:56.823 06:44:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1244684 00:38:57.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:57.390 Nvme0n1 : 3.00 23257.33 90.85 0.00 0.00 0.00 0.00 0.00 00:38:57.390 [2024-12-13T05:44:49.044Z] =================================================================================================================== 00:38:57.390 [2024-12-13T05:44:49.044Z] Total : 23257.33 90.85 0.00 0.00 0.00 0.00 0.00 00:38:57.390 00:38:58.767 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:58.767 Nvme0n1 : 4.00 23348.50 91.21 0.00 0.00 0.00 0.00 0.00 00:38:58.767 [2024-12-13T05:44:50.421Z] =================================================================================================================== 00:38:58.767 [2024-12-13T05:44:50.422Z] Total : 23348.50 91.21 0.00 0.00 0.00 0.00 0.00 00:38:58.768 00:38:59.704 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.704 Nvme0n1 : 5.00 23308.40 91.05 0.00 0.00 0.00 0.00 0.00 00:38:59.704 [2024-12-13T05:44:51.358Z] =================================================================================================================== 00:38:59.704 [2024-12-13T05:44:51.358Z] Total : 23308.40 91.05 0.00 0.00 0.00 0.00 0.00 00:38:59.704 00:39:00.639 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:39:00.639 Nvme0n1 : 6.00 23381.83 91.34 0.00 0.00 0.00 0.00 0.00 00:39:00.639 [2024-12-13T05:44:52.293Z] =================================================================================================================== 00:39:00.639 [2024-12-13T05:44:52.293Z] Total : 23381.83 91.34 0.00 0.00 0.00 0.00 0.00 00:39:00.639 00:39:01.575 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:01.575 Nvme0n1 : 7.00 23434.29 91.54 0.00 0.00 0.00 0.00 0.00 00:39:01.575 [2024-12-13T05:44:53.229Z] =================================================================================================================== 00:39:01.575 [2024-12-13T05:44:53.229Z] Total : 23434.29 91.54 0.00 0.00 0.00 0.00 0.00 00:39:01.575 00:39:02.511 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.511 Nvme0n1 : 8.00 23473.62 91.69 0.00 0.00 0.00 0.00 0.00 00:39:02.511 [2024-12-13T05:44:54.165Z] =================================================================================================================== 00:39:02.511 [2024-12-13T05:44:54.165Z] Total : 23473.62 91.69 0.00 0.00 0.00 0.00 0.00 00:39:02.511 00:39:03.448 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:03.448 Nvme0n1 : 9.00 23508.00 91.83 0.00 0.00 0.00 0.00 0.00 00:39:03.448 [2024-12-13T05:44:55.102Z] =================================================================================================================== 00:39:03.448 [2024-12-13T05:44:55.102Z] Total : 23508.00 91.83 0.00 0.00 0.00 0.00 0.00 00:39:03.448 00:39:04.824 00:39:04.824 Latency(us) 00:39:04.824 [2024-12-13T05:44:56.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.824 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:04.824 Nvme0n1 : 10.00 23542.07 91.96 0.00 0.00 5433.86 2808.69 25465.42 00:39:04.824 [2024-12-13T05:44:56.478Z] 
=================================================================================================================== 00:39:04.824 [2024-12-13T05:44:56.478Z] Total : 23542.07 91.96 0.00 0.00 5433.86 2808.69 25465.42 00:39:04.824 { 00:39:04.824 "results": [ 00:39:04.824 { 00:39:04.824 "job": "Nvme0n1", 00:39:04.824 "core_mask": "0x2", 00:39:04.824 "workload": "randwrite", 00:39:04.824 "status": "finished", 00:39:04.824 "queue_depth": 128, 00:39:04.824 "io_size": 4096, 00:39:04.824 "runtime": 10.001203, 00:39:04.824 "iops": 23542.067889232927, 00:39:04.824 "mibps": 91.96120269231612, 00:39:04.824 "io_failed": 0, 00:39:04.824 "io_timeout": 0, 00:39:04.824 "avg_latency_us": 5433.86342076709, 00:39:04.824 "min_latency_us": 2808.6857142857143, 00:39:04.824 "max_latency_us": 25465.417142857143 00:39:04.824 } 00:39:04.824 ], 00:39:04.824 "core_count": 1 00:39:04.824 } 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1244666 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1244666 ']' 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1244666 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1244666 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:04.824 06:44:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1244666' 00:39:04.824 killing process with pid 1244666 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1244666 00:39:04.824 Received shutdown signal, test time was about 10.000000 seconds 00:39:04.824 00:39:04.824 Latency(us) 00:39:04.824 [2024-12-13T05:44:56.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:04.824 [2024-12-13T05:44:56.478Z] =================================================================================================================== 00:39:04.824 [2024-12-13T05:44:56.478Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1244666 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:04.824 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:05.083 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:05.083 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r 
'.[0].free_clusters' 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1241674 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1241674 00:39:05.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1241674 Killed "${NVMF_APP[@]}" "$@" 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=1246460 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 1246460 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:05.342 
06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1246460 ']' 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:05.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:05.342 06:44:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:05.342 [2024-12-13 06:44:56.946664] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:05.342 [2024-12-13 06:44:56.947569] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:05.342 [2024-12-13 06:44:56.947606] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:05.602 [2024-12-13 06:44:57.026219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:05.602 [2024-12-13 06:44:57.047561] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:05.602 [2024-12-13 06:44:57.047596] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:39:05.602 [2024-12-13 06:44:57.047603] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:05.602 [2024-12-13 06:44:57.047609] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:05.602 [2024-12-13 06:44:57.047617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:05.602 [2024-12-13 06:44:57.048073] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:05.602 [2024-12-13 06:44:57.110887] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:05.602 [2024-12-13 06:44:57.111080] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:05.602 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:05.860 [2024-12-13 06:44:57.341431] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:05.860 [2024-12-13 06:44:57.341638] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:05.860 [2024-12-13 06:44:57.341723] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:05.860 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:05.861 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:06.119 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bed5646-509a-4357-a4fc-08f8d1fbeb18 -t 2000 00:39:06.119 [ 
00:39:06.119 { 00:39:06.119 "name": "9bed5646-509a-4357-a4fc-08f8d1fbeb18", 00:39:06.119 "aliases": [ 00:39:06.119 "lvs/lvol" 00:39:06.119 ], 00:39:06.119 "product_name": "Logical Volume", 00:39:06.119 "block_size": 4096, 00:39:06.119 "num_blocks": 38912, 00:39:06.120 "uuid": "9bed5646-509a-4357-a4fc-08f8d1fbeb18", 00:39:06.120 "assigned_rate_limits": { 00:39:06.120 "rw_ios_per_sec": 0, 00:39:06.120 "rw_mbytes_per_sec": 0, 00:39:06.120 "r_mbytes_per_sec": 0, 00:39:06.120 "w_mbytes_per_sec": 0 00:39:06.120 }, 00:39:06.120 "claimed": false, 00:39:06.120 "zoned": false, 00:39:06.120 "supported_io_types": { 00:39:06.120 "read": true, 00:39:06.120 "write": true, 00:39:06.120 "unmap": true, 00:39:06.120 "flush": false, 00:39:06.120 "reset": true, 00:39:06.120 "nvme_admin": false, 00:39:06.120 "nvme_io": false, 00:39:06.120 "nvme_io_md": false, 00:39:06.120 "write_zeroes": true, 00:39:06.120 "zcopy": false, 00:39:06.120 "get_zone_info": false, 00:39:06.120 "zone_management": false, 00:39:06.120 "zone_append": false, 00:39:06.120 "compare": false, 00:39:06.120 "compare_and_write": false, 00:39:06.120 "abort": false, 00:39:06.120 "seek_hole": true, 00:39:06.120 "seek_data": true, 00:39:06.120 "copy": false, 00:39:06.120 "nvme_iov_md": false 00:39:06.120 }, 00:39:06.120 "driver_specific": { 00:39:06.120 "lvol": { 00:39:06.120 "lvol_store_uuid": "63f5ee09-5d17-4f21-b787-5f4e01cb3e73", 00:39:06.120 "base_bdev": "aio_bdev", 00:39:06.120 "thin_provision": false, 00:39:06.120 "num_allocated_clusters": 38, 00:39:06.120 "snapshot": false, 00:39:06.120 "clone": false, 00:39:06.120 "esnap_clone": false 00:39:06.120 } 00:39:06.120 } 00:39:06.120 } 00:39:06.120 ] 00:39:06.120 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:06.120 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:06.120 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:06.379 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:06.379 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:06.379 06:44:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:06.637 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:06.637 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:06.896 [2024-12-13 06:44:58.308633] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:06.896 request: 00:39:06.896 { 00:39:06.896 "uuid": "63f5ee09-5d17-4f21-b787-5f4e01cb3e73", 00:39:06.896 "method": "bdev_lvol_get_lvstores", 00:39:06.896 "req_id": 1 00:39:06.896 } 00:39:06.896 Got JSON-RPC 
error response 00:39:06.896 response: 00:39:06.896 { 00:39:06.896 "code": -19, 00:39:06.896 "message": "No such device" 00:39:06.896 } 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:06.896 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:07.155 aio_bdev 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:07.155 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:07.155 06:44:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:07.414 06:44:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 9bed5646-509a-4357-a4fc-08f8d1fbeb18 -t 2000 00:39:07.673 [ 00:39:07.673 { 00:39:07.673 "name": "9bed5646-509a-4357-a4fc-08f8d1fbeb18", 00:39:07.673 "aliases": [ 00:39:07.673 "lvs/lvol" 00:39:07.673 ], 00:39:07.673 "product_name": "Logical Volume", 00:39:07.673 "block_size": 4096, 00:39:07.673 "num_blocks": 38912, 00:39:07.673 "uuid": "9bed5646-509a-4357-a4fc-08f8d1fbeb18", 00:39:07.673 "assigned_rate_limits": { 00:39:07.673 "rw_ios_per_sec": 0, 00:39:07.673 "rw_mbytes_per_sec": 0, 00:39:07.673 "r_mbytes_per_sec": 0, 00:39:07.673 "w_mbytes_per_sec": 0 00:39:07.673 }, 00:39:07.673 "claimed": false, 00:39:07.673 "zoned": false, 00:39:07.673 "supported_io_types": { 00:39:07.673 "read": true, 00:39:07.673 "write": true, 00:39:07.673 "unmap": true, 00:39:07.673 "flush": false, 00:39:07.673 "reset": true, 00:39:07.673 "nvme_admin": false, 00:39:07.673 "nvme_io": false, 00:39:07.673 "nvme_io_md": false, 00:39:07.673 "write_zeroes": true, 00:39:07.673 "zcopy": false, 00:39:07.673 "get_zone_info": false, 00:39:07.673 "zone_management": false, 00:39:07.673 "zone_append": false, 00:39:07.673 "compare": false, 00:39:07.673 "compare_and_write": false, 00:39:07.673 "abort": false, 00:39:07.673 "seek_hole": true, 00:39:07.673 "seek_data": true, 00:39:07.673 "copy": false, 00:39:07.673 "nvme_iov_md": false 00:39:07.673 }, 00:39:07.673 "driver_specific": { 00:39:07.673 "lvol": { 00:39:07.673 "lvol_store_uuid": "63f5ee09-5d17-4f21-b787-5f4e01cb3e73", 00:39:07.673 "base_bdev": "aio_bdev", 00:39:07.673 "thin_provision": false, 00:39:07.673 "num_allocated_clusters": 38, 00:39:07.673 
"snapshot": false, 00:39:07.673 "clone": false, 00:39:07.673 "esnap_clone": false 00:39:07.673 } 00:39:07.673 } 00:39:07.673 } 00:39:07.673 ] 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:07.673 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:07.932 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:07.932 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 9bed5646-509a-4357-a4fc-08f8d1fbeb18 00:39:08.191 06:44:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 63f5ee09-5d17-4f21-b787-5f4e01cb3e73 00:39:08.450 06:44:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:08.450 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:08.450 00:39:08.450 real 0m16.905s 00:39:08.450 user 0m34.333s 00:39:08.450 sys 0m3.782s 00:39:08.450 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:08.450 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:08.450 ************************************ 00:39:08.450 END TEST lvs_grow_dirty 00:39:08.450 ************************************ 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:08.709 nvmf_trace.0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:08.709 rmmod nvme_tcp 00:39:08.709 rmmod nvme_fabrics 00:39:08.709 rmmod nvme_keyring 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 1246460 ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 1246460 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 1246460 ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1246460 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1246460 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1246460' 00:39:08.709 killing process with pid 1246460 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1246460 00:39:08.709 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1246460 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:08.968 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:08.969 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:08.969 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:08.969 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:08.969 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:08.969 06:45:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:11.062 00:39:11.062 real 0m41.697s 00:39:11.062 user 0m52.041s 00:39:11.062 sys 0m10.091s 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:11.062 ************************************ 00:39:11.062 END TEST nvmf_lvs_grow 00:39:11.062 ************************************ 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.062 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:11.062 ************************************ 00:39:11.062 START TEST nvmf_bdev_io_wait 00:39:11.062 ************************************ 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:11.062 * Looking for test storage... 00:39:11.062 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:11.062 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@337 -- # read -ra ver2 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:11.321 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:11.322 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.322 --rc genhtml_branch_coverage=1 00:39:11.322 --rc genhtml_function_coverage=1 00:39:11.322 --rc genhtml_legend=1 00:39:11.322 --rc geninfo_all_blocks=1 00:39:11.322 --rc geninfo_unexecuted_blocks=1 00:39:11.322 00:39:11.322 ' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:11.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.322 --rc genhtml_branch_coverage=1 00:39:11.322 --rc genhtml_function_coverage=1 00:39:11.322 --rc genhtml_legend=1 00:39:11.322 --rc geninfo_all_blocks=1 00:39:11.322 --rc geninfo_unexecuted_blocks=1 00:39:11.322 00:39:11.322 ' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:11.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.322 --rc genhtml_branch_coverage=1 00:39:11.322 --rc genhtml_function_coverage=1 00:39:11.322 --rc genhtml_legend=1 00:39:11.322 --rc geninfo_all_blocks=1 00:39:11.322 --rc geninfo_unexecuted_blocks=1 00:39:11.322 00:39:11.322 ' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:11.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:11.322 --rc genhtml_branch_coverage=1 00:39:11.322 --rc genhtml_function_coverage=1 00:39:11.322 --rc genhtml_legend=1 00:39:11.322 --rc geninfo_all_blocks=1 00:39:11.322 --rc geninfo_unexecuted_blocks=1 00:39:11.322 00:39:11.322 ' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:11.322 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:11.322 06:45:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # 
have_pci_nics=0 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:11.322 06:45:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:17.895 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:17.895 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:17.895 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:17.895 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:39:17.895 Found net devices under 0000:af:00.0: cvl_0_0 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:17.895 Found net devices under 0000:af:00.1: cvl_0_1 00:39:17.895 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:17.896 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:17.896 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:17.896 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:39:17.896 00:39:17.896 --- 10.0.0.2 ping statistics --- 00:39:17.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.896 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:17.896 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:17.896 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:39:17.896 00:39:17.896 --- 10.0.0.1 ping statistics --- 00:39:17.896 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:17.896 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:17.896 06:45:08 
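The netns plumbing logged above isolates the target NIC in its own network namespace, addresses both sides on 10.0.0.0/24, opens TCP port 4420, and verifies the path with a ping in each direction. A minimal sketch of that sequence, using the interface names and addresses from this run (the commands need root to actually apply, so the function accepts RUN=echo to preview them instead of executing):

```shell
#!/usr/bin/env bash
# Sketch of the network-namespace setup from the log above. Interface names
# (cvl_0_0 / cvl_0_1) and addresses are this run's; real execution needs root.
# Set RUN=echo to print the commands instead of executing them.
RUN=${RUN:-}
NS=cvl_0_0_ns_spdk

setup_nvmf_tcp_netns() {
    $RUN ip netns add "$NS"                                       # fresh ns for the target
    $RUN ip link set cvl_0_0 netns "$NS"                          # move target NIC into it
    $RUN ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator-side address
    $RUN ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target-side address
    $RUN ip link set cvl_0_1 up
    $RUN ip netns exec "$NS" ip link set cvl_0_0 up
    $RUN ip netns exec "$NS" ip link set lo up
    $RUN iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic
    $RUN ping -c 1 10.0.0.2                                       # initiator -> target check
    $RUN ip netns exec "$NS" ping -c 1 10.0.0.1                   # target -> initiator check
}
```

Sourcing the file only defines the function; `RUN=echo setup_nvmf_tcp_netns` prints the command list without touching the host.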
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=1250581 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 1250581 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1250581 ']' 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:17.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.896 [2024-12-13 06:45:08.700441] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:17.896 [2024-12-13 06:45:08.701328] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:17.896 [2024-12-13 06:45:08.701360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:17.896 [2024-12-13 06:45:08.781696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:17.896 [2024-12-13 06:45:08.805527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:17.896 [2024-12-13 06:45:08.805565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:17.896 [2024-12-13 06:45:08.805572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:17.896 [2024-12-13 06:45:08.805578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:17.896 [2024-12-13 06:45:08.805582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:17.896 [2024-12-13 06:45:08.806992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:17.896 [2024-12-13 06:45:08.807110] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:17.896 [2024-12-13 06:45:08.807221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:17.896 [2024-12-13 06:45:08.807221] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:17.896 [2024-12-13 06:45:08.807560] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.896 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.896 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.896 [2024-12-13 06:45:08.939994] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:17.897 [2024-12-13 06:45:08.940872] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:17.897 [2024-12-13 06:45:08.940965] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:17.897 [2024-12-13 06:45:08.941099] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.897 [2024-12-13 06:45:08.947963] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.897 Malloc0 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.897 06:45:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:17.897 [2024-12-13 06:45:09.012026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1250722 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1250724 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:17.897 06:45:09 
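Once the target is up and past --wait-for-rpc, the rpc_cmd calls above provision it: start the framework, create the TCP transport, back it with a Malloc bdev, and expose that bdev through a subsystem listener on 10.0.0.2:4420. A sketch of the same sequence as plain rpc.py invocations; the scripts/rpc.py path and the default RPC socket are assumptions, and RUN=echo previews the commands without a running target:

```shell
#!/usr/bin/env bash
# Sketch of the provisioning RPCs from the log above. Assumes SPDK's
# scripts/rpc.py helper and the default /var/tmp/spdk.sock RPC socket.
# Set RUN=echo to print the commands instead of executing them.
RUN=${RUN:-}
RPC=${RPC:-scripts/rpc.py}

provision_nvmf_target() {
    $RUN "$RPC" framework_start_init                     # leave the --wait-for-rpc holding state
    $RUN "$RPC" nvmf_create_transport -t tcp -o -u 8192  # transport options as used in this run
    $RUN "$RPC" bdev_malloc_create 64 512 -b Malloc0     # 64 MiB RAM-backed bdev, 512 B blocks
    $RUN "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RUN "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RUN "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
```

The ordering matters: the transport must exist before the listener is added, and the bdev before the namespace attach.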
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1250726 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:17.897 { 00:39:17.897 "params": { 00:39:17.897 "name": "Nvme$subsystem", 00:39:17.897 "trtype": "$TEST_TRANSPORT", 00:39:17.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.897 "adrfam": "ipv4", 00:39:17.897 "trsvcid": "$NVMF_PORT", 00:39:17.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.897 "hdgst": ${hdgst:-false}, 00:39:17.897 "ddgst": ${ddgst:-false} 00:39:17.897 }, 00:39:17.897 "method": "bdev_nvme_attach_controller" 00:39:17.897 } 00:39:17.897 EOF 00:39:17.897 )") 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1250729 00:39:17.897 06:45:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:17.897 { 00:39:17.897 "params": { 00:39:17.897 "name": "Nvme$subsystem", 00:39:17.897 "trtype": "$TEST_TRANSPORT", 00:39:17.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.897 "adrfam": "ipv4", 00:39:17.897 "trsvcid": "$NVMF_PORT", 00:39:17.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.897 "hdgst": ${hdgst:-false}, 00:39:17.897 "ddgst": ${ddgst:-false} 00:39:17.897 }, 00:39:17.897 "method": "bdev_nvme_attach_controller" 00:39:17.897 } 00:39:17.897 EOF 00:39:17.897 )") 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:17.897 { 00:39:17.897 "params": { 00:39:17.897 
"name": "Nvme$subsystem", 00:39:17.897 "trtype": "$TEST_TRANSPORT", 00:39:17.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.897 "adrfam": "ipv4", 00:39:17.897 "trsvcid": "$NVMF_PORT", 00:39:17.897 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.897 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.897 "hdgst": ${hdgst:-false}, 00:39:17.897 "ddgst": ${ddgst:-false} 00:39:17.897 }, 00:39:17.897 "method": "bdev_nvme_attach_controller" 00:39:17.897 } 00:39:17.897 EOF 00:39:17.897 )") 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:17.897 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:17.897 { 00:39:17.897 "params": { 00:39:17.897 "name": "Nvme$subsystem", 00:39:17.897 "trtype": "$TEST_TRANSPORT", 00:39:17.897 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:17.897 "adrfam": "ipv4", 00:39:17.897 "trsvcid": "$NVMF_PORT", 00:39:17.898 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:17.898 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:17.898 "hdgst": ${hdgst:-false}, 00:39:17.898 "ddgst": ${ddgst:-false} 00:39:17.898 }, 00:39:17.898 
"method": "bdev_nvme_attach_controller" 00:39:17.898 } 00:39:17.898 EOF 00:39:17.898 )") 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1250722 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:17.898 "params": { 00:39:17.898 "name": "Nvme1", 00:39:17.898 "trtype": "tcp", 00:39:17.898 "traddr": "10.0.0.2", 00:39:17.898 "adrfam": "ipv4", 00:39:17.898 "trsvcid": "4420", 00:39:17.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:17.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:17.898 "hdgst": false, 00:39:17.898 "ddgst": false 00:39:17.898 }, 00:39:17.898 "method": "bdev_nvme_attach_controller" 00:39:17.898 }' 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:17.898 "params": { 00:39:17.898 "name": "Nvme1", 00:39:17.898 "trtype": "tcp", 00:39:17.898 "traddr": "10.0.0.2", 00:39:17.898 "adrfam": "ipv4", 00:39:17.898 "trsvcid": "4420", 00:39:17.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:17.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:17.898 "hdgst": false, 00:39:17.898 "ddgst": false 00:39:17.898 }, 00:39:17.898 "method": "bdev_nvme_attach_controller" 00:39:17.898 }' 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:17.898 "params": { 00:39:17.898 "name": "Nvme1", 00:39:17.898 "trtype": "tcp", 00:39:17.898 "traddr": "10.0.0.2", 00:39:17.898 "adrfam": "ipv4", 00:39:17.898 "trsvcid": "4420", 00:39:17.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:17.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:17.898 "hdgst": false, 00:39:17.898 "ddgst": false 00:39:17.898 }, 00:39:17.898 "method": "bdev_nvme_attach_controller" 00:39:17.898 }' 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:17.898 06:45:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:17.898 "params": { 00:39:17.898 "name": "Nvme1", 00:39:17.898 "trtype": "tcp", 00:39:17.898 "traddr": "10.0.0.2", 00:39:17.898 "adrfam": "ipv4", 00:39:17.898 "trsvcid": "4420", 00:39:17.898 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:17.898 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:17.898 "hdgst": false, 00:39:17.898 "ddgst": false 00:39:17.898 }, 00:39:17.898 "method": "bdev_nvme_attach_controller" 
00:39:17.898 }' 00:39:17.898 [2024-12-13 06:45:09.064489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:17.898 [2024-12-13 06:45:09.064497] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:17.898 [2024-12-13 06:45:09.064515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:17.898 [2024-12-13 06:45:09.064544] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:17.898 [2024-12-13 06:45:09.064544] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:17.898 [2024-12-13 06:45:09.064558] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:17.898 [2024-12-13 06:45:09.065965] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:17.898 [2024-12-13 06:45:09.066006] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:17.898 [2024-12-13 06:45:09.254999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.898 [2024-12-13 06:45:09.275390] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:17.898 [2024-12-13 06:45:09.321394] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.898 [2024-12-13 06:45:09.337066] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:17.898 [2024-12-13 06:45:09.381823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.898 [2024-12-13 06:45:09.396747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:17.898 [2024-12-13 06:45:09.482490] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:17.898 [2024-12-13 06:45:09.502176] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:18.157 Running I/O for 1 seconds... 00:39:18.157 Running I/O for 1 seconds... 00:39:18.157 Running I/O for 1 seconds... 00:39:18.157 Running I/O for 1 seconds... 
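Each bdevperf instance above reads its attach configuration from the JSON that gen_nvmf_target_json prints (the '{ "params": ... "method": "bdev_nvme_attach_controller" }' blocks in the log). A sketch of the full file that fragment plausibly expands to; the "subsystems"/"bdev"/"config" wrapper is an assumption based on SPDK's JSON config layout, while the parameter values are the ones from this run:

```shell
#!/usr/bin/env bash
# Emits a bdevperf --json config equivalent to the fragment printed above.
# Assumption: the params/method object sits inside a bdev-subsystem "config" array.
gen_nvme1_json() {
    cat <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
}
```

In the log the file never hits disk: each bdevperf receives it over /dev/fd/63 via process substitution.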
00:39:19.095 8439.00 IOPS, 32.96 MiB/s 00:39:19.095 Latency(us) 00:39:19.095 [2024-12-13T05:45:10.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.095 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:19.095 Nvme1n1 : 1.02 8421.43 32.90 0.00 0.00 15047.68 3183.18 26089.57 00:39:19.095 [2024-12-13T05:45:10.749Z] =================================================================================================================== 00:39:19.095 [2024-12-13T05:45:10.749Z] Total : 8421.43 32.90 0.00 0.00 15047.68 3183.18 26089.57 00:39:19.095 242800.00 IOPS, 948.44 MiB/s 00:39:19.095 Latency(us) 00:39:19.095 [2024-12-13T05:45:10.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.095 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:19.095 Nvme1n1 : 1.00 242432.71 947.00 0.00 0.00 525.17 224.30 1497.97 00:39:19.095 [2024-12-13T05:45:10.749Z] =================================================================================================================== 00:39:19.095 [2024-12-13T05:45:10.749Z] Total : 242432.71 947.00 0.00 0.00 525.17 224.30 1497.97 00:39:19.095 7675.00 IOPS, 29.98 MiB/s 00:39:19.095 Latency(us) 00:39:19.095 [2024-12-13T05:45:10.749Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.095 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:39:19.095 Nvme1n1 : 1.01 7782.59 30.40 0.00 0.00 16399.65 4681.14 26089.57 00:39:19.095 [2024-12-13T05:45:10.749Z] =================================================================================================================== 00:39:19.095 [2024-12-13T05:45:10.749Z] Total : 7782.59 30.40 0.00 0.00 16399.65 4681.14 26089.57 00:39:19.354 13453.00 IOPS, 52.55 MiB/s 00:39:19.354 Latency(us) 00:39:19.354 [2024-12-13T05:45:11.008Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:19.354 Job: Nvme1n1 (Core Mask 
0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:19.354 Nvme1n1 : 1.01 13549.06 52.93 0.00 0.00 9425.95 3214.38 13793.77 00:39:19.354 [2024-12-13T05:45:11.008Z] =================================================================================================================== 00:39:19.354 [2024-12-13T05:45:11.008Z] Total : 13549.06 52.93 0.00 0.00 9425.95 3214.38 13793.77 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1250724 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1250726 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1250729 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:19.355 06:45:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:19.355 rmmod nvme_tcp 00:39:19.355 rmmod nvme_fabrics 00:39:19.355 rmmod nvme_keyring 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 1250581 ']' 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 1250581 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1250581 ']' 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1250581 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1250581 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1250581' 00:39:19.355 killing process with pid 1250581 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1250581 00:39:19.355 06:45:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1250581 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:19.614 06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:19.614 
06:45:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:22.153 00:39:22.153 real 0m10.590s 00:39:22.153 user 0m14.619s 00:39:22.153 sys 0m6.398s 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:22.153 ************************************ 00:39:22.153 END TEST nvmf_bdev_io_wait 00:39:22.153 ************************************ 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:22.153 ************************************ 00:39:22.153 START TEST nvmf_queue_depth 00:39:22.153 ************************************ 00:39:22.153 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:22.153 * Looking for test storage... 
00:39:22.153 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.154 --rc genhtml_branch_coverage=1 00:39:22.154 --rc genhtml_function_coverage=1 00:39:22.154 --rc genhtml_legend=1 00:39:22.154 --rc geninfo_all_blocks=1 00:39:22.154 --rc geninfo_unexecuted_blocks=1 00:39:22.154 00:39:22.154 ' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.154 --rc genhtml_branch_coverage=1 00:39:22.154 --rc genhtml_function_coverage=1 00:39:22.154 --rc genhtml_legend=1 00:39:22.154 --rc geninfo_all_blocks=1 00:39:22.154 --rc geninfo_unexecuted_blocks=1 00:39:22.154 00:39:22.154 ' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.154 --rc genhtml_branch_coverage=1 00:39:22.154 --rc genhtml_function_coverage=1 00:39:22.154 --rc genhtml_legend=1 00:39:22.154 --rc geninfo_all_blocks=1 00:39:22.154 --rc geninfo_unexecuted_blocks=1 00:39:22.154 00:39:22.154 ' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:22.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:22.154 --rc genhtml_branch_coverage=1 00:39:22.154 --rc genhtml_function_coverage=1 00:39:22.154 --rc genhtml_legend=1 00:39:22.154 --rc 
geninfo_all_blocks=1 00:39:22.154 --rc geninfo_unexecuted_blocks=1 00:39:22.154 00:39:22.154 ' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.154 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.154 06:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:22.155 06:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:22.155 06:45:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:22.155 06:45:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:27.442 
06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:27.442 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:27.442 06:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:27.442 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:27.442 Found net devices under 0000:af:00.0: cvl_0_0 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.442 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:27.443 Found net devices under 0000:af:00.1: cvl_0_1 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:27.443 06:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:27.443 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:27.702 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:27.702 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:39:27.702 00:39:27.702 --- 10.0.0.2 ping statistics --- 00:39:27.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.702 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:27.702 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:27.702 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:39:27.702 00:39:27.702 --- 10.0.0.1 ping statistics --- 00:39:27.702 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:27.702 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:27.702 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:27.961 06:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:27.961 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:27.961 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:27.961 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:27.961 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=1254755 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 1254755 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254755 ']' 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:27.962 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:27.962 [2024-12-13 06:45:19.420155] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:27.962 [2024-12-13 06:45:19.421078] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:27.962 [2024-12-13 06:45:19.421115] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:27.962 [2024-12-13 06:45:19.504511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:27.962 [2024-12-13 06:45:19.525526] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:27.962 [2024-12-13 06:45:19.525560] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:27.962 [2024-12-13 06:45:19.525567] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:27.962 [2024-12-13 06:45:19.525573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:27.962 [2024-12-13 06:45:19.525578] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:27.962 [2024-12-13 06:45:19.526042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:27.962 [2024-12-13 06:45:19.588980] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:27.962 [2024-12-13 06:45:19.589183] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:27.962 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 [2024-12-13 06:45:19.654783] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 Malloc0 00:39:28.221 06:45:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.221 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.221 [2024-12-13 06:45:19.726773] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.222 
06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1254918 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1254918 /var/tmp/bdevperf.sock 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1254918 ']' 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:28.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:28.222 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.222 [2024-12-13 06:45:19.777749] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:28.222 [2024-12-13 06:45:19.777792] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1254918 ] 00:39:28.222 [2024-12-13 06:45:19.853863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:28.480 [2024-12-13 06:45:19.876491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:28.480 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:28.480 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:28.480 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:28.480 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:28.480 06:45:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:28.480 NVMe0n1 00:39:28.480 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:28.480 06:45:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:28.739 Running I/O for 10 seconds... 
00:39:30.614 12288.00 IOPS, 48.00 MiB/s [2024-12-13T05:45:23.204Z] 12288.00 IOPS, 48.00 MiB/s [2024-12-13T05:45:24.583Z] 12374.00 IOPS, 48.34 MiB/s [2024-12-13T05:45:25.520Z] 12437.00 IOPS, 48.58 MiB/s [2024-12-13T05:45:26.456Z] 12450.20 IOPS, 48.63 MiB/s [2024-12-13T05:45:27.393Z] 12477.50 IOPS, 48.74 MiB/s [2024-12-13T05:45:28.329Z] 12510.14 IOPS, 48.87 MiB/s [2024-12-13T05:45:29.266Z] 12541.62 IOPS, 48.99 MiB/s [2024-12-13T05:45:30.203Z] 12580.33 IOPS, 49.14 MiB/s [2024-12-13T05:45:30.463Z] 12596.50 IOPS, 49.21 MiB/s 00:39:38.809 Latency(us) 00:39:38.809 [2024-12-13T05:45:30.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.809 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:38.809 Verification LBA range: start 0x0 length 0x4000 00:39:38.809 NVMe0n1 : 10.06 12625.07 49.32 0.00 0.00 80852.93 19223.89 55175.07 00:39:38.809 [2024-12-13T05:45:30.463Z] =================================================================================================================== 00:39:38.809 [2024-12-13T05:45:30.463Z] Total : 12625.07 49.32 0.00 0.00 80852.93 19223.89 55175.07 00:39:38.809 { 00:39:38.809 "results": [ 00:39:38.809 { 00:39:38.809 "job": "NVMe0n1", 00:39:38.809 "core_mask": "0x1", 00:39:38.809 "workload": "verify", 00:39:38.809 "status": "finished", 00:39:38.809 "verify_range": { 00:39:38.809 "start": 0, 00:39:38.809 "length": 16384 00:39:38.809 }, 00:39:38.809 "queue_depth": 1024, 00:39:38.809 "io_size": 4096, 00:39:38.809 "runtime": 10.058478, 00:39:38.809 "iops": 12625.071109167808, 00:39:38.809 "mibps": 49.31668402018675, 00:39:38.809 "io_failed": 0, 00:39:38.809 "io_timeout": 0, 00:39:38.809 "avg_latency_us": 80852.92768170022, 00:39:38.809 "min_latency_us": 19223.893333333333, 00:39:38.809 "max_latency_us": 55175.07047619048 00:39:38.809 } 00:39:38.809 ], 00:39:38.809 "core_count": 1 00:39:38.809 } 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 1254918 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254918 ']' 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254918 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254918 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254918' 00:39:38.809 killing process with pid 1254918 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254918 00:39:38.809 Received shutdown signal, test time was about 10.000000 seconds 00:39:38.809 00:39:38.809 Latency(us) 00:39:38.809 [2024-12-13T05:45:30.463Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.809 [2024-12-13T05:45:30.463Z] =================================================================================================================== 00:39:38.809 [2024-12-13T05:45:30.463Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:38.809 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254918 00:39:39.068 06:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:39.068 rmmod nvme_tcp 00:39:39.068 rmmod nvme_fabrics 00:39:39.068 rmmod nvme_keyring 00:39:39.068 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 1254755 ']' 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 1254755 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1254755 ']' 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1254755 00:39:39.069 06:45:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1254755 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1254755' 00:39:39.069 killing process with pid 1254755 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1254755 00:39:39.069 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1254755 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.328 06:45:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.233 00:39:41.233 real 0m19.571s 00:39:41.233 user 0m22.659s 00:39:41.233 sys 0m6.149s 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:41.233 ************************************ 00:39:41.233 END TEST nvmf_queue_depth 00:39:41.233 ************************************ 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:41.233 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:41.493 ************************************ 00:39:41.493 START 
TEST nvmf_target_multipath 00:39:41.493 ************************************ 00:39:41.493 06:45:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:39:41.493 * Looking for test storage... 00:39:41.493 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:39:41.493 06:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.493 --rc genhtml_branch_coverage=1 00:39:41.493 --rc genhtml_function_coverage=1 00:39:41.493 --rc genhtml_legend=1 00:39:41.493 --rc geninfo_all_blocks=1 00:39:41.493 --rc geninfo_unexecuted_blocks=1 00:39:41.493 00:39:41.493 ' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.493 --rc genhtml_branch_coverage=1 00:39:41.493 --rc genhtml_function_coverage=1 00:39:41.493 --rc genhtml_legend=1 00:39:41.493 --rc geninfo_all_blocks=1 00:39:41.493 --rc geninfo_unexecuted_blocks=1 00:39:41.493 00:39:41.493 ' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.493 --rc genhtml_branch_coverage=1 00:39:41.493 --rc genhtml_function_coverage=1 00:39:41.493 --rc genhtml_legend=1 00:39:41.493 --rc geninfo_all_blocks=1 00:39:41.493 --rc geninfo_unexecuted_blocks=1 00:39:41.493 00:39:41.493 ' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:41.493 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.493 --rc genhtml_branch_coverage=1 00:39:41.493 --rc genhtml_function_coverage=1 00:39:41.493 --rc genhtml_legend=1 00:39:41.493 --rc geninfo_all_blocks=1 00:39:41.493 --rc geninfo_unexecuted_blocks=1 00:39:41.493 00:39:41.493 ' 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.493 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:41.494 06:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.494 06:45:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:39:41.494 06:45:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:39:48.065 06:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:48.065 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:48.065 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:48.065 Found net devices under 0000:af:00.0: cvl_0_0 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.065 06:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:48.065 Found net devices under 0000:af:00.1: cvl_0_1 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:48.065 06:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:48.065 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:48.066 06:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:48.066 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:48.066 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:39:48.066 00:39:48.066 --- 10.0.0.2 ping statistics --- 00:39:48.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.066 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:48.066 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:48.066 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:39:48.066 00:39:48.066 --- 10.0.0.1 ping statistics --- 00:39:48.066 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.066 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:39:48.066 only one NIC for nvmf test 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:39:48.066 06:45:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:48.066 rmmod nvme_tcp 00:39:48.066 rmmod nvme_fabrics 00:39:48.066 rmmod nvme_keyring 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:48.066 06:45:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:48.066 06:45:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:48.066 06:45:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:49.443 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.702 
06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:49.702 00:39:49.702 real 0m8.197s 00:39:49.702 user 0m1.827s 00:39:49.702 sys 0m4.373s 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:39:49.702 ************************************ 00:39:49.702 END TEST nvmf_target_multipath 00:39:49.702 ************************************ 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:49.702 ************************************ 00:39:49.702 START TEST nvmf_zcopy 00:39:49.702 ************************************ 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:39:49.702 * Looking for test storage... 
00:39:49.702 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:49.702 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:49.703 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:39:49.962 06:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:49.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.962 --rc genhtml_branch_coverage=1 00:39:49.962 --rc genhtml_function_coverage=1 00:39:49.962 --rc genhtml_legend=1 00:39:49.962 --rc geninfo_all_blocks=1 00:39:49.962 --rc geninfo_unexecuted_blocks=1 00:39:49.962 00:39:49.962 ' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:49.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.962 --rc genhtml_branch_coverage=1 00:39:49.962 --rc genhtml_function_coverage=1 00:39:49.962 --rc genhtml_legend=1 00:39:49.962 --rc geninfo_all_blocks=1 00:39:49.962 --rc geninfo_unexecuted_blocks=1 00:39:49.962 00:39:49.962 ' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:49.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.962 --rc genhtml_branch_coverage=1 00:39:49.962 --rc genhtml_function_coverage=1 00:39:49.962 --rc genhtml_legend=1 00:39:49.962 --rc geninfo_all_blocks=1 00:39:49.962 --rc geninfo_unexecuted_blocks=1 00:39:49.962 00:39:49.962 ' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:49.962 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:49.962 --rc genhtml_branch_coverage=1 00:39:49.962 --rc genhtml_function_coverage=1 00:39:49.962 --rc genhtml_legend=1 00:39:49.962 --rc geninfo_all_blocks=1 00:39:49.962 --rc geninfo_unexecuted_blocks=1 00:39:49.962 00:39:49.962 ' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:49.962 06:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:49.962 06:45:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:39:49.962 06:45:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:56.531 
06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:56.531 06:45:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:56.531 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:56.532 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:56.532 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:56.532 Found net devices under 0000:af:00.0: cvl_0_0 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:56.532 Found net devices under 0000:af:00.1: cvl_0_1 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:56.532 06:45:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:56.532 06:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:56.532 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:39:56.532 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.310 ms 00:39:56.532 00:39:56.532 --- 10.0.0.2 ping statistics --- 00:39:56.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:56.532 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:56.532 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:39:56.532 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms
00:39:56.532
00:39:56.532 --- 10.0.0.1 ping statistics ---
00:39:56.532 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:39:56.532 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- #
nvmfpid=1263396 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 1263396 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1263396 ']' 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:56.532 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:56.532 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.532 [2024-12-13 06:45:47.260281] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:56.532 [2024-12-13 06:45:47.261246] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:39:56.532 [2024-12-13 06:45:47.261283] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:56.532 [2024-12-13 06:45:47.340702] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.532 [2024-12-13 06:45:47.361895] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:56.532 [2024-12-13 06:45:47.361929] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:56.532 [2024-12-13 06:45:47.361940] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:56.533 [2024-12-13 06:45:47.361945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:56.533 [2024-12-13 06:45:47.361950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:56.533 [2024-12-13 06:45:47.362386] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:56.533 [2024-12-13 06:45:47.424983] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:56.533 [2024-12-13 06:45:47.425192] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 [2024-12-13 06:45:47.503068] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 
06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 [2024-12-13 06:45:47.531350] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 malloc0 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:56.533 { 00:39:56.533 "params": { 00:39:56.533 "name": "Nvme$subsystem", 00:39:56.533 "trtype": "$TEST_TRANSPORT", 00:39:56.533 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:56.533 "adrfam": "ipv4", 00:39:56.533 "trsvcid": "$NVMF_PORT", 00:39:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:56.533 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:56.533 "hdgst": ${hdgst:-false}, 00:39:56.533 "ddgst": ${ddgst:-false} 00:39:56.533 }, 00:39:56.533 "method": "bdev_nvme_attach_controller" 00:39:56.533 } 00:39:56.533 EOF 00:39:56.533 )") 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:39:56.533 06:45:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:39:56.533 06:45:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:56.533 "params": { 00:39:56.533 "name": "Nvme1", 00:39:56.533 "trtype": "tcp", 00:39:56.533 "traddr": "10.0.0.2", 00:39:56.533 "adrfam": "ipv4", 00:39:56.533 "trsvcid": "4420", 00:39:56.533 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:56.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:56.533 "hdgst": false, 00:39:56.533 "ddgst": false 00:39:56.533 }, 00:39:56.533 "method": "bdev_nvme_attach_controller" 00:39:56.533 }' 00:39:56.533 [2024-12-13 06:45:47.632604] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:39:56.533 [2024-12-13 06:45:47.632653] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1263426 ] 00:39:56.533 [2024-12-13 06:45:47.706103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:56.533 [2024-12-13 06:45:47.729096] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:56.533 Running I/O for 10 seconds... 
00:39:58.406 8552.00 IOPS, 66.81 MiB/s [2024-12-13T05:45:51.445Z] 8613.50 IOPS, 67.29 MiB/s [2024-12-13T05:45:52.012Z] 8591.00 IOPS, 67.12 MiB/s [2024-12-13T05:45:53.390Z] 8626.50 IOPS, 67.39 MiB/s [2024-12-13T05:45:54.326Z] 8648.80 IOPS, 67.57 MiB/s [2024-12-13T05:45:55.272Z] 8665.00 IOPS, 67.70 MiB/s [2024-12-13T05:45:56.210Z] 8675.43 IOPS, 67.78 MiB/s [2024-12-13T05:45:57.147Z] 8686.38 IOPS, 67.86 MiB/s [2024-12-13T05:45:58.083Z] 8697.00 IOPS, 67.95 MiB/s [2024-12-13T05:45:58.083Z] 8701.70 IOPS, 67.98 MiB/s
00:40:06.429 Latency(us)
00:40:06.429 [2024-12-13T05:45:58.083Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:06.429 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:40:06.429 Verification LBA range: start 0x0 length 0x1000
00:40:06.429 Nvme1n1 : 10.01 8705.75 68.01 0.00 0.00 14661.12 526.63 21096.35
00:40:06.429 [2024-12-13T05:45:58.083Z] ===================================================================================================================
00:40:06.429 [2024-12-13T05:45:58.083Z] Total : 8705.75 68.01 0.00 0.00 14661.12 526.63 21096.35
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1264981
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=()
00:40:06.689 06:45:58
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:06.689 { 00:40:06.689 "params": { 00:40:06.689 "name": "Nvme$subsystem", 00:40:06.689 "trtype": "$TEST_TRANSPORT", 00:40:06.689 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:06.689 "adrfam": "ipv4", 00:40:06.689 "trsvcid": "$NVMF_PORT", 00:40:06.689 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:06.689 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:06.689 "hdgst": ${hdgst:-false}, 00:40:06.689 "ddgst": ${ddgst:-false} 00:40:06.689 }, 00:40:06.689 "method": "bdev_nvme_attach_controller" 00:40:06.689 } 00:40:06.689 EOF 00:40:06.689 )") 00:40:06.689 [2024-12-13 06:45:58.190742] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.190776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:06.689 06:45:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:06.689 "params": { 00:40:06.689 "name": "Nvme1", 00:40:06.689 "trtype": "tcp", 00:40:06.689 "traddr": "10.0.0.2", 00:40:06.689 "adrfam": "ipv4", 00:40:06.689 "trsvcid": "4420", 00:40:06.689 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:06.689 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:06.689 "hdgst": false, 00:40:06.689 "ddgst": false 00:40:06.689 }, 00:40:06.689 "method": "bdev_nvme_attach_controller" 00:40:06.689 }' 00:40:06.689 [2024-12-13 06:45:58.202700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.202713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.214695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.214704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.226695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.226705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.230766] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:06.689 [2024-12-13 06:45:58.230809] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1264981 ] 00:40:06.689 [2024-12-13 06:45:58.238693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.238703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.250693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.250704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.262692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.262707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.274695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.274704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.286695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.286705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.298695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.298703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.305942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:06.689 [2024-12-13 06:45:58.310694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:06.689 [2024-12-13 06:45:58.310705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.322697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.322711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.689 [2024-12-13 06:45:58.328180] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:06.689 [2024-12-13 06:45:58.334704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.689 [2024-12-13 06:45:58.334716] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.346711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.346729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.358703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.358728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.370698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.370715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.382699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.382712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.394698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.394713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.406717] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.406738] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.418705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.418722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.430701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.430717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.442698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.442710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.454696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.454706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.466695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.466704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.478708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.478726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.490696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.490709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.502704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.502713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.514697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.514706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.526695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.526706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.538698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.538711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.550693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.550702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.562694] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.562703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.574700] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.574724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.586695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 [2024-12-13 06:45:58.586706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:06.949 [2024-12-13 06:45:58.598695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:06.949 
[2024-12-13 06:45:58.598705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.610697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.610709] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.622831] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.622849] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.634699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.634711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 Running I/O for 5 seconds... 00:40:07.253 [2024-12-13 06:45:58.649873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.649893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.664858] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.664876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.679299] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.679318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.694152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.253 [2024-12-13 06:45:58.694170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.253 [2024-12-13 06:45:58.708660] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 
06:45:58.708693] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.723277] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.723294] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.733707] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.733725] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.748073] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.748091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.762179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.762196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.775662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.775680] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.790896] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.790914] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.801805] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.801823] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:07.254 [2024-12-13 06:45:58.816625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:07.254 [2024-12-13 06:45:58.816643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.830893] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.830911] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.841690] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.841712] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.856382] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.856400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.870663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.870682] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.883413] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.883433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.254 [2024-12-13 06:45:58.898512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.254 [2024-12-13 06:45:58.898531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.912697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.912715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.927434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.927459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.942309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.942329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.955745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.955767] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.966958] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.966975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.980882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.980900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:58.995866] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:58.995884] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.011040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.011058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.026378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.026397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.040166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.040185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.055347] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.055365] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.070943] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.070961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.083404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.083422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.098826] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.098844] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.112665] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.112683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.127239] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.127257] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.139375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.139393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.154250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.154269] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.168702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.168720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.539 [2024-12-13 06:45:59.183136] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.539 [2024-12-13 06:45:59.183154] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.198791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.198809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.212166] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.212184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.226920] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.226939] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.237283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.237300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.252071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.252092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.266865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.266883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.279685] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.279705] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.294582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.820 [2024-12-13 06:45:59.294600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.820 [2024-12-13 06:45:59.307968] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.307985] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.322355] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.322374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.335468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.335487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.347992] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.348009] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.358804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.358822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.372227] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.372246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.386730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.386748] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.397513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.397530] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.412207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.412224] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.426875] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.426892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.440319] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.440337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.454890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.454908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:07.821 [2024-12-13 06:45:59.465392] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:07.821 [2024-12-13 06:45:59.465410] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.480106] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.480125] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.494578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.494604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.507563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.507581] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.522349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.522367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.536300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.536317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.551232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.551249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.566889] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.566906] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.580743] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.580761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.595126] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.595144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.610839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.610859] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.623197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.623215] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.638986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.639004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 16927.00 IOPS, 132.24 MiB/s [2024-12-13T05:45:59.734Z] [2024-12-13 06:45:59.653041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.653060] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.667571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.667590] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.682640] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.682661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.696475] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.696494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.711433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.711458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.080 [2024-12-13 06:45:59.725855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.080 [2024-12-13 06:45:59.725874] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.740388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.740407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.754881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.754899] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.767339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.767362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.780378] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.780397] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.794798] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.794817] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.807543] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.807562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.819868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.819886] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.830899] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.830917] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.844984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.845003] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.859515] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.859533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.874771] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.874789] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.886028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.886046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.900435] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.900459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.914772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.914790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.925796] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.925814] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.940330] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.940348] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.954912] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.954930] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.965955] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.339 [2024-12-13 06:45:59.965973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.339 [2024-12-13 06:45:59.980486] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:45:59.980504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:45:59.995040] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:45:59.995058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.012887] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.012907] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.027480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.027504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.043152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.043172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.058335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.058355] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.072127] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.072147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.087153] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.087172] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.102381] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.102400] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.118119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.118138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.131593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.131611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.146407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.598 [2024-12-13 06:46:00.146426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.598 [2024-12-13 06:46:00.159429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.159452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.175389] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.175407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.190909] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.190928] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.203869] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.203887] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.218981] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.219000] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.234688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.234706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.599 [2024-12-13 06:46:00.246072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.599 [2024-12-13 06:46:00.246090] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.260779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.260798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.274999] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.275016] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.290417] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.290436] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.304767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.304786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.319031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.319049] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.334663] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.334681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.348067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.348086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.362886] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.362904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.373266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.373284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.388305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.388323] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.402839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.402857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.415625] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.415643] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.430431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.430456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.441476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.441494] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.456623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.456642] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.471213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.471231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.486751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.486770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:08.858 [2024-12-13 06:46:00.500079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:08.858 [2024-12-13 06:46:00.500098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.515050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.515068] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.530794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.530812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.544385] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.544403] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.559121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.559139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.574881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.574900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.588935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.588953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.603648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.603666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.618011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.618029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.632035] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.632054] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 16872.50 IOPS, 131.82 MiB/s [2024-12-13T05:46:00.771Z] [2024-12-13 06:46:00.646890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.646909] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.660237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.660255] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.675220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.675237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.690429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.690452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.704779] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.704798] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.719314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.719332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.734397] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.734415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.748905] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.117 [2024-12-13 06:46:00.748924] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.117 [2024-12-13 06:46:00.763791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.763809] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.779013] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.779031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.794781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.794801] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.807689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.807708] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.822300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.822319] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.836578] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.836601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.851459] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.851476] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.866311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.866329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.880695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.880713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.895063] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.895080] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.908314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.908332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.922938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.922955] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.933961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.933979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.948728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.948745] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.963258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.963276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.978471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.978489] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:00.992525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:00.992543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:01.007376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:01.007394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.376 [2024-12-13 06:46:01.022894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.376 [2024-12-13 06:46:01.022912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.033845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.033863] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.048878] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.048896] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.063691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.063720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.078556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.078575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.092848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.092867] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.107563] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.107587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.122332] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.122352] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.134708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.134728] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.148739] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.148757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.163329] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.163347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.178609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.178627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.192973] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.192991] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.207822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.207842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.222617] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.222637] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.234938] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:09.635 [2024-12-13 06:46:01.234957] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:09.635 [2024-12-13 06:46:01.248555] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-12-13 06:46:01.248573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.635 [2024-12-13 06:46:01.263074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.635 [2024-12-13 06:46:01.263093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.635 [2024-12-13 06:46:01.278649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.635 [2024-12-13 06:46:01.278667] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.291628] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.291646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.306525] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.306545] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.319008] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.319026] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.332100] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.332118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.343010] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.343027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.358279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.358297] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.371962] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.371984] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.386499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.386517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.397655] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.397674] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.412662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.412681] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.427290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.427309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.441679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.441697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.456237] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.456256] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.470454] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.470473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:09.894 [2024-12-13 06:46:01.484537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.484556] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.499188] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.499207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.514611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.514628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.527539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.527557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:09.894 [2024-12-13 06:46:01.542586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:09.894 [2024-12-13 06:46:01.542604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.556119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.556137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.570699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.570718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.583918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.583936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.595061] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.595079] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.608957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.608975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.623857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.623875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 [2024-12-13 06:46:01.637847] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.153 [2024-12-13 06:46:01.637870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.153 16885.67 IOPS, 131.92 MiB/s [2024-12-13T05:46:01.807Z] [2024-12-13 06:46:01.652293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.652311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.667212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.667231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.679509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.679527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.694637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.694654] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.708676] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.708694] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.723446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.723471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.738351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.738369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.752275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.752293] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.766632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.766649] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.780391] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.780409] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.794944] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.794962] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.154 [2024-12-13 06:46:01.805281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.154 [2024-12-13 06:46:01.805300] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.820207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.820225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.834514] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.834532] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.847976] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.847993] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.862333] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.862351] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.876865] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.876883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.891357] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.891374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.907096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.907113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.922407] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.922426] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.936222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 
[2024-12-13 06:46:01.936240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.951172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.951190] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.966465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.966482] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.979081] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.979098] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:01.992989] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:01.993007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:02.007719] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:02.007737] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:02.022286] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:02.022304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:02.035004] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:02.035021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:02.048055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:02.048072] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.413 [2024-12-13 06:46:02.059119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.413 [2024-12-13 06:46:02.059136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.074121] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.074139] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.088494] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.088512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.102933] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.102951] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.114152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.114170] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.129006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.129024] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.143794] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.143812] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.158799] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.158818] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:10.672 [2024-12-13 06:46:02.172349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.172367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.187068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.187086] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.202592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.202611] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.216055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.216073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.227143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.227161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.240531] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.240549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.255260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.255277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.270306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.270324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.284324] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.284341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.299087] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.299104] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.314609] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.314627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.672 [2024-12-13 06:46:02.326674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.672 [2024-12-13 06:46:02.326692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.931 [2024-12-13 06:46:02.340472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.340490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.355193] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.355211] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.370867] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.370885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.384462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.384496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.398994] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.399011] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.414320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.414339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.428586] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.428604] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.443674] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.443692] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.458481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.458500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.469291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.469309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.483814] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.483832] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.498611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.498629] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.512651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 
[2024-12-13 06:46:02.512668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.527075] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.527093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.539403] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.539422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.554699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.554718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.568471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.568490] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:10.932 [2024-12-13 06:46:02.583029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:10.932 [2024-12-13 06:46:02.583047] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.598587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.598607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.612890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.612908] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.627554] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.627573] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.642757] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.642776] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 16892.00 IOPS, 131.97 MiB/s [2024-12-13T05:46:02.845Z] [2024-12-13 06:46:02.656620] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.656640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.671030] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.671048] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.686522] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.686547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.699481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.699504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.714240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.714259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.728070] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.728088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.742716] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.742734] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.755370] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.755389] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.770654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.770673] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.784654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.784672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.799342] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.799361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.814679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.814698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.828056] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.828074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.191 [2024-12-13 06:46:02.842934] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.191 [2024-12-13 06:46:02.842953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.856956] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.856975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:11.450 [2024-12-13 06:46:02.871250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.871268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.886725] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.886743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.899311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.899328] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.912662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.912683] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.927556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.927574] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.942051] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.942070] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.955362] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.955381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.971281] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.971303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:02.985874] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:02.985892] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.000425] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.000443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.014918] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.014936] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.026124] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.026142] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.040069] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.040087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.054691] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.054719] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.067225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.067242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.082677] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.082695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.450 [2024-12-13 06:46:03.096290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:11.450 [2024-12-13 06:46:03.096308] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.110898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.110916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.123444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.123467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.136072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.136089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.150599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.150618] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.164160] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.164178] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.178839] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.178857] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.190259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.190276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.204314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 
[2024-12-13 06:46:03.204333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.218986] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.219004] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.234267] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.234289] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.248669] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.248687] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.263268] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.263286] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.278540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.278558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.292613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.709 [2024-12-13 06:46:03.292630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.709 [2024-12-13 06:46:03.307140] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.710 [2024-12-13 06:46:03.307157] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.710 [2024-12-13 06:46:03.322466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.710 [2024-12-13 06:46:03.322501] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.710 [2024-12-13 06:46:03.336196] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.710 [2024-12-13 06:46:03.336214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.710 [2024-12-13 06:46:03.351007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.710 [2024-12-13 06:46:03.351025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.366433] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.366459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.379167] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.379184] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.392471] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.392488] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.407673] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.407690] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.422813] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.422831] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.436349] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.436368] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:11.969 [2024-12-13 06:46:03.451029] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.451046] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.462290] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.462309] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.476528] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.476547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.491524] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.491541] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.507071] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.507088] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.519431] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.519455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.534039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.534057] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.547592] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.547610] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.562653] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.562670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.575388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.575405] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.590637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.590655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.603954] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.603972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:11.969 [2024-12-13 06:46:03.619175] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:11.969 [2024-12-13 06:46:03.619192] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.634424] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.634442] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.648688] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.648706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 16899.00 IOPS, 132.02 MiB/s 00:40:12.228 Latency(us) 00:40:12.228 [2024-12-13T05:46:03.882Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:12.228 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:40:12.228 Nvme1n1 : 5.01 16903.16 132.06 0.00 0.00 
7565.84 1997.29 13731.35 00:40:12.228 [2024-12-13T05:46:03.882Z] =================================================================================================================== 00:40:12.228 [2024-12-13T05:46:03.882Z] Total : 16903.16 132.06 0.00 0.00 7565.84 1997.29 13731.35 00:40:12.228 [2024-12-13 06:46:03.658701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.658718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.670699] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.670715] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.682715] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.682734] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.694702] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.694718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.706704] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.706720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.718696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.718713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.730697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.730722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 
[2024-12-13 06:46:03.742697] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.742722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.754696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.754710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.766693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.766703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.778698] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.778722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.790695] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.790717] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 [2024-12-13 06:46:03.802692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:12.228 [2024-12-13 06:46:03.802701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:12.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1264981) - No such process 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1264981 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.229 
06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.229 delay0 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:12.229 06:46:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:12.487 [2024-12-13 06:46:03.992576] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:19.052 Initializing NVMe Controllers 00:40:19.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:19.052 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:19.052 Initialization complete. Launching workers. 00:40:19.052 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 1184 00:40:19.052 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1457, failed to submit 47 00:40:19.052 success 1314, unsuccessful 143, failed 0 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:19.052 rmmod nvme_tcp 00:40:19.052 rmmod nvme_fabrics 00:40:19.052 rmmod nvme_keyring 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 1263396 ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 
1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1263396 ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1263396' 00:40:19.052 killing process with pid 1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1263396 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:19.052 06:46:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:19.052 06:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:21.587 00:40:21.587 real 0m31.456s 00:40:21.587 user 0m41.037s 00:40:21.587 sys 0m12.152s 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:21.587 ************************************ 00:40:21.587 END TEST nvmf_zcopy 00:40:21.587 ************************************ 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:21.587 06:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:21.587 ************************************ 00:40:21.587 START TEST nvmf_nmic 00:40:21.587 ************************************ 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:21.587 * Looking for test storage... 00:40:21.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@338 -- # local 'op=<' 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:21.587 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@366 -- # ver2[v]=2 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.588 --rc genhtml_branch_coverage=1 00:40:21.588 --rc genhtml_function_coverage=1 00:40:21.588 --rc genhtml_legend=1 00:40:21.588 --rc geninfo_all_blocks=1 00:40:21.588 --rc geninfo_unexecuted_blocks=1 00:40:21.588 00:40:21.588 ' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.588 --rc genhtml_branch_coverage=1 00:40:21.588 --rc genhtml_function_coverage=1 00:40:21.588 --rc genhtml_legend=1 00:40:21.588 --rc geninfo_all_blocks=1 00:40:21.588 --rc geninfo_unexecuted_blocks=1 00:40:21.588 00:40:21.588 ' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.588 --rc genhtml_branch_coverage=1 00:40:21.588 --rc genhtml_function_coverage=1 00:40:21.588 --rc genhtml_legend=1 00:40:21.588 --rc geninfo_all_blocks=1 00:40:21.588 --rc geninfo_unexecuted_blocks=1 00:40:21.588 00:40:21.588 ' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:21.588 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:21.588 --rc genhtml_branch_coverage=1 00:40:21.588 --rc genhtml_function_coverage=1 00:40:21.588 --rc genhtml_legend=1 00:40:21.588 --rc geninfo_all_blocks=1 00:40:21.588 --rc geninfo_unexecuted_blocks=1 00:40:21.588 00:40:21.588 ' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.588 06:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:21.588 06:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:26.862 06:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:26.862 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:27.121 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:27.121 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:27.121 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:27.121 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:27.121 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:27.122 06:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:27.122 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:27.122 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:27.122 06:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:27.122 Found net devices under 0000:af:00.0: cvl_0_0 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:27.122 06:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:27.122 Found net devices under 0000:af:00.1: cvl_0_1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:27.122 06:46:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:27.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:27.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.286 ms 00:40:27.122 00:40:27.122 --- 10.0.0.2 ping statistics --- 00:40:27.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.122 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:27.122 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:27.122 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:40:27.122 00:40:27.122 --- 10.0.0.1 ping statistics --- 00:40:27.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:27.122 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:27.122 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=1270415 
00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 1270415 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1270415 ']' 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:27.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:27.382 06:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.382 [2024-12-13 06:46:18.858047] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:27.382 [2024-12-13 06:46:18.858937] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:27.382 [2024-12-13 06:46:18.858972] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:27.382 [2024-12-13 06:46:18.938018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:27.382 [2024-12-13 06:46:18.961480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:27.382 [2024-12-13 06:46:18.961518] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:27.382 [2024-12-13 06:46:18.961525] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:27.382 [2024-12-13 06:46:18.961531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:27.382 [2024-12-13 06:46:18.961536] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:27.382 [2024-12-13 06:46:18.962967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:27.382 [2024-12-13 06:46:18.963074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:27.382 [2024-12-13 06:46:18.963101] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.382 [2024-12-13 06:46:18.963102] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:27.382 [2024-12-13 06:46:19.025940] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:27.382 [2024-12-13 06:46:19.026753] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:40:27.382 [2024-12-13 06:46:19.027029] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:40:27.382 [2024-12-13 06:46:19.027466] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:27.382 [2024-12-13 06:46:19.027503] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:27.642 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:27.642 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:27.642 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:27.642 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:27.642 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 [2024-12-13 06:46:19.100086] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 Malloc0 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 [2024-12-13 06:46:19.184356] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:27.643 06:46:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:27.643 test case1: single bdev can't be used in multiple subsystems 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 [2024-12-13 06:46:19.215782] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:27.643 [2024-12-13 06:46:19.215801] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:27.643 [2024-12-13 06:46:19.215808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.643 request: 00:40:27.643 { 00:40:27.643 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:27.643 "namespace": { 00:40:27.643 "bdev_name": "Malloc0", 00:40:27.643 "no_auto_visible": false, 00:40:27.643 "hide_metadata": false 00:40:27.643 }, 00:40:27.643 "method": "nvmf_subsystem_add_ns", 00:40:27.643 "req_id": 1 00:40:27.643 } 00:40:27.643 Got JSON-RPC error response 00:40:27.643 response: 00:40:27.643 { 00:40:27.643 "code": -32602, 00:40:27.643 "message": "Invalid parameters" 00:40:27.643 } 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:27.643 Adding namespace failed - expected result. 
00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:27.643 test case2: host connect to nvmf target in multiple paths 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:27.643 [2024-12-13 06:46:19.227879] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:27.643 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:27.902 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:28.161 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:28.161 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:28.161 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:28.161 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:28.161 06:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:30.694 06:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:30.694 [global] 00:40:30.694 thread=1 00:40:30.694 invalidate=1 00:40:30.694 rw=write 00:40:30.694 time_based=1 00:40:30.694 runtime=1 00:40:30.694 ioengine=libaio 00:40:30.694 direct=1 00:40:30.694 bs=4096 00:40:30.694 iodepth=1 00:40:30.694 norandommap=0 00:40:30.694 numjobs=1 00:40:30.694 00:40:30.694 verify_dump=1 00:40:30.694 verify_backlog=512 00:40:30.694 verify_state_save=0 00:40:30.694 do_verify=1 00:40:30.694 verify=crc32c-intel 00:40:30.694 [job0] 00:40:30.694 filename=/dev/nvme0n1 00:40:30.694 Could not set queue depth (nvme0n1) 00:40:30.694 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:30.694 fio-3.35 00:40:30.694 Starting 1 thread 00:40:31.630 00:40:31.630 job0: (groupid=0, jobs=1): err= 0: pid=1271050: Fri Dec 13 
06:46:23 2024 00:40:31.630 read: IOPS=797, BW=3190KiB/s (3266kB/s)(3228KiB/1012msec) 00:40:31.630 slat (nsec): min=6543, max=30158, avg=7708.01, stdev=2544.57 00:40:31.630 clat (usec): min=182, max=41847, avg=981.98, stdev=5516.67 00:40:31.630 lat (usec): min=189, max=41870, avg=989.68, stdev=5518.58 00:40:31.630 clat percentiles (usec): 00:40:31.630 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 192], 00:40:31.630 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 245], 60.00th=[ 249], 00:40:31.630 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 255], 95.00th=[ 260], 00:40:31.630 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:40:31.630 | 99.99th=[41681] 00:40:31.630 write: IOPS=1011, BW=4047KiB/s (4145kB/s)(4096KiB/1012msec); 0 zone resets 00:40:31.630 slat (usec): min=9, max=27869, avg=37.94, stdev=870.59 00:40:31.630 clat (usec): min=126, max=373, avg=165.69, stdev=41.80 00:40:31.630 lat (usec): min=136, max=28242, avg=203.63, stdev=878.06 00:40:31.630 clat percentiles (usec): 00:40:31.630 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:40:31.630 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 147], 00:40:31.630 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 247], 95.00th=[ 249], 00:40:31.630 | 99.00th=[ 253], 99.50th=[ 255], 99.90th=[ 326], 99.95th=[ 375], 00:40:31.630 | 99.99th=[ 375] 00:40:31.630 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:40:31.630 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:40:31.630 lat (usec) : 250=84.76%, 500=14.42% 00:40:31.630 lat (msec) : 50=0.82% 00:40:31.630 cpu : usr=0.99%, sys=1.68%, ctx=1835, majf=0, minf=1 00:40:31.630 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:31.630 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.630 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:31.630 issued rwts: total=807,1024,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:40:31.630 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:31.630 00:40:31.630 Run status group 0 (all jobs): 00:40:31.630 READ: bw=3190KiB/s (3266kB/s), 3190KiB/s-3190KiB/s (3266kB/s-3266kB/s), io=3228KiB (3305kB), run=1012-1012msec 00:40:31.630 WRITE: bw=4047KiB/s (4145kB/s), 4047KiB/s-4047KiB/s (4145kB/s-4145kB/s), io=4096KiB (4194kB), run=1012-1012msec 00:40:31.630 00:40:31.630 Disk stats (read/write): 00:40:31.630 nvme0n1: ios=712/1024, merge=0/0, ticks=1664/154, in_queue=1818, util=98.50% 00:40:31.630 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:31.889 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:31.889 06:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:31.889 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:31.889 rmmod nvme_tcp 00:40:31.889 rmmod nvme_fabrics 00:40:31.889 rmmod nvme_keyring 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 1270415 ']' 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 1270415 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1270415 ']' 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1270415 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1270415 
00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1270415' 00:40:32.148 killing process with pid 1270415 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1270415 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1270415 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:32.148 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:32.407 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:32.407 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:32.407 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:32.407 06:46:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:32.407 06:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:34.312 00:40:34.312 real 0m13.151s 00:40:34.312 user 0m24.545s 00:40:34.312 sys 0m6.014s 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:34.312 ************************************ 00:40:34.312 END TEST nvmf_nmic 00:40:34.312 ************************************ 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:34.312 ************************************ 00:40:34.312 START TEST nvmf_fio_target 00:40:34.312 ************************************ 00:40:34.312 06:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:34.572 * Looking for test storage... 
00:40:34.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:34.572 
06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.572 --rc genhtml_branch_coverage=1 00:40:34.572 --rc genhtml_function_coverage=1 00:40:34.572 --rc genhtml_legend=1 00:40:34.572 --rc geninfo_all_blocks=1 00:40:34.572 --rc geninfo_unexecuted_blocks=1 00:40:34.572 00:40:34.572 ' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.572 --rc genhtml_branch_coverage=1 00:40:34.572 --rc genhtml_function_coverage=1 00:40:34.572 --rc genhtml_legend=1 00:40:34.572 --rc geninfo_all_blocks=1 00:40:34.572 --rc geninfo_unexecuted_blocks=1 00:40:34.572 00:40:34.572 ' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.572 --rc genhtml_branch_coverage=1 00:40:34.572 --rc genhtml_function_coverage=1 00:40:34.572 --rc genhtml_legend=1 00:40:34.572 --rc geninfo_all_blocks=1 00:40:34.572 --rc geninfo_unexecuted_blocks=1 00:40:34.572 00:40:34.572 ' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:34.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:34.572 --rc genhtml_branch_coverage=1 00:40:34.572 --rc genhtml_function_coverage=1 00:40:34.572 --rc genhtml_legend=1 00:40:34.572 --rc geninfo_all_blocks=1 
00:40:34.572 --rc geninfo_unexecuted_blocks=1 00:40:34.572 00:40:34.572 ' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:34.572 
06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:34.572 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.573 06:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:34.573 
06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:34.573 06:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:34.573 06:46:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:40:41.141 06:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:41.141 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:41.142 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:41.142 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:41.142 
06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:41.142 Found net 
devices under 0000:af:00.0: cvl_0_0 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:41.142 Found net devices under 0000:af:00.1: cvl_0_1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:41.142 06:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:41.142 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:41.142 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.317 ms 00:40:41.142 00:40:41.142 --- 10.0.0.2 ping statistics --- 00:40:41.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.142 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:41.142 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:41.142 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:40:41.142 00:40:41.142 --- 10.0.0.1 ping statistics --- 00:40:41.142 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:41.142 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:41.142 06:46:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=1274737 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 1274737 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1274737 ']' 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:41.142 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:41.142 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:41.143 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:41.143 06:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:41.143 [2024-12-13 06:46:32.010949] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:41.143 [2024-12-13 06:46:32.011841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:40:41.143 [2024-12-13 06:46:32.011874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:41.143 [2024-12-13 06:46:32.091708] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:41.143 [2024-12-13 06:46:32.114465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:41.143 [2024-12-13 06:46:32.114501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:41.143 [2024-12-13 06:46:32.114509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:41.143 [2024-12-13 06:46:32.114514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:41.143 [2024-12-13 06:46:32.114519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:41.143 [2024-12-13 06:46:32.115811] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:41.143 [2024-12-13 06:46:32.115920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:41.143 [2024-12-13 06:46:32.116023] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:41.143 [2024-12-13 06:46:32.116024] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:41.143 [2024-12-13 06:46:32.178542] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:41.143 [2024-12-13 06:46:32.179705] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:41.143 [2024-12-13 06:46:32.179760] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:41.143 [2024-12-13 06:46:32.180150] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:41.143 [2024-12-13 06:46:32.180188] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:40:41.143 [2024-12-13 06:46:32.416779] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:40:41.143 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:40:41.402 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:40:41.402 06:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:41.660 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:40:41.660 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:41.919 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:40:41.919 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:40:41.919 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:42.178 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:40:42.178 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:42.437 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:40:42.437 06:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:40:42.697 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:40:42.697 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:40:42.697 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:42.955 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:42.955 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:43.214 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:40:43.214 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:40:43.473 06:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:43.473 [2024-12-13 06:46:35.080686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:43.473 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:40:43.731 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:40:43.989 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:40:44.248 06:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:40:46.152 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:46.152 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:46.152 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:46.152 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:40:46.411 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:46.411 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:40:46.411 06:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:46.411 [global] 00:40:46.411 thread=1 00:40:46.411 invalidate=1 00:40:46.411 rw=write 00:40:46.411 time_based=1 00:40:46.411 runtime=1 00:40:46.411 ioengine=libaio 00:40:46.411 direct=1 00:40:46.411 bs=4096 00:40:46.411 iodepth=1 00:40:46.411 norandommap=0 00:40:46.411 numjobs=1 00:40:46.411 00:40:46.411 verify_dump=1 00:40:46.411 verify_backlog=512 00:40:46.411 verify_state_save=0 00:40:46.411 do_verify=1 00:40:46.411 verify=crc32c-intel 00:40:46.411 [job0] 00:40:46.411 filename=/dev/nvme0n1 00:40:46.411 [job1] 00:40:46.411 filename=/dev/nvme0n2 00:40:46.411 [job2] 00:40:46.411 filename=/dev/nvme0n3 00:40:46.411 [job3] 00:40:46.411 filename=/dev/nvme0n4 00:40:46.411 Could not set queue depth (nvme0n1) 00:40:46.411 Could not set queue depth (nvme0n2) 00:40:46.411 Could not set queue depth (nvme0n3) 00:40:46.411 Could not set queue depth (nvme0n4) 00:40:46.670 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:46.670 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:46.670 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:46.670 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:46.670 fio-3.35 00:40:46.670 Starting 4 threads 00:40:48.067 00:40:48.067 job0: (groupid=0, jobs=1): err= 0: pid=1275827: Fri Dec 13 06:46:39 2024 00:40:48.067 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:40:48.067 slat (nsec): min=6883, max=9739, avg=8823.00, stdev=718.06 00:40:48.067 clat (usec): min=247, max=41077, avg=39207.15, stdev=8493.22 00:40:48.067 lat (usec): min=256, 
max=41086, avg=39215.97, stdev=8493.23 00:40:48.067 clat percentiles (usec): 00:40:48.067 | 1.00th=[ 247], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:48.067 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:48.067 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:48.067 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:48.067 | 99.99th=[41157] 00:40:48.067 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:40:48.067 slat (nsec): min=6044, max=37558, avg=7627.22, stdev=1586.04 00:40:48.067 clat (usec): min=138, max=353, avg=187.01, stdev=19.91 00:40:48.067 lat (usec): min=146, max=363, avg=194.64, stdev=20.32 00:40:48.067 clat percentiles (usec): 00:40:48.067 | 1.00th=[ 147], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 174], 00:40:48.067 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:40:48.067 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 215], 00:40:48.067 | 99.00th=[ 235], 99.50th=[ 318], 99.90th=[ 355], 99.95th=[ 355], 00:40:48.067 | 99.99th=[ 355] 00:40:48.067 bw ( KiB/s): min= 4096, max= 4096, per=25.33%, avg=4096.00, stdev= 0.00, samples=1 00:40:48.067 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:48.067 lat (usec) : 250=95.14%, 500=0.75% 00:40:48.067 lat (msec) : 50=4.11% 00:40:48.067 cpu : usr=0.30%, sys=0.10%, ctx=537, majf=0, minf=1 00:40:48.067 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.067 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.067 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:48.067 job1: (groupid=0, jobs=1): err= 0: pid=1275828: Fri Dec 13 06:46:39 2024 00:40:48.067 read: IOPS=21, BW=86.9KiB/s (89.0kB/s)(88.0KiB/1013msec) 
00:40:48.067 slat (nsec): min=10696, max=25677, avg=23035.00, stdev=2904.11 00:40:48.067 clat (usec): min=40852, max=41234, avg=40978.14, stdev=80.28 00:40:48.067 lat (usec): min=40878, max=41245, avg=41001.17, stdev=78.10 00:40:48.067 clat percentiles (usec): 00:40:48.067 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:40:48.067 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:48.067 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:48.068 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:48.068 | 99.99th=[41157] 00:40:48.068 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:40:48.068 slat (nsec): min=10957, max=38674, avg=13001.55, stdev=2343.75 00:40:48.068 clat (usec): min=146, max=377, avg=199.61, stdev=26.99 00:40:48.068 lat (usec): min=158, max=394, avg=212.61, stdev=27.12 00:40:48.068 clat percentiles (usec): 00:40:48.068 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 176], 00:40:48.068 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 190], 60.00th=[ 212], 00:40:48.068 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 239], 00:40:48.068 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 379], 99.95th=[ 379], 00:40:48.068 | 99.99th=[ 379] 00:40:48.068 bw ( KiB/s): min= 4096, max= 4096, per=25.33%, avg=4096.00, stdev= 0.00, samples=1 00:40:48.068 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:48.068 lat (usec) : 250=94.19%, 500=1.69% 00:40:48.068 lat (msec) : 50=4.12% 00:40:48.068 cpu : usr=0.89%, sys=0.59%, ctx=535, majf=0, minf=1 00:40:48.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.068 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:40:48.068 job2: (groupid=0, jobs=1): err= 0: pid=1275829: Fri Dec 13 06:46:39 2024 00:40:48.068 read: IOPS=2360, BW=9443KiB/s (9669kB/s)(9452KiB/1001msec) 00:40:48.068 slat (nsec): min=7226, max=39449, avg=8187.93, stdev=1237.79 00:40:48.068 clat (usec): min=186, max=40826, avg=227.25, stdev=835.71 00:40:48.068 lat (usec): min=194, max=40835, avg=235.44, stdev=835.72 00:40:48.068 clat percentiles (usec): 00:40:48.068 | 1.00th=[ 192], 5.00th=[ 194], 10.00th=[ 196], 20.00th=[ 198], 00:40:48.068 | 30.00th=[ 202], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:40:48.068 | 70.00th=[ 212], 80.00th=[ 217], 90.00th=[ 227], 95.00th=[ 249], 00:40:48.068 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 302], 99.95th=[ 322], 00:40:48.068 | 99.99th=[40633] 00:40:48.068 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:48.068 slat (nsec): min=10865, max=43641, avg=12237.40, stdev=1934.26 00:40:48.068 clat (usec): min=125, max=300, avg=155.49, stdev=28.05 00:40:48.068 lat (usec): min=143, max=315, avg=167.73, stdev=28.74 00:40:48.068 clat percentiles (usec): 00:40:48.068 | 1.00th=[ 137], 5.00th=[ 139], 10.00th=[ 139], 20.00th=[ 141], 00:40:48.068 | 30.00th=[ 141], 40.00th=[ 143], 50.00th=[ 143], 60.00th=[ 145], 00:40:48.068 | 70.00th=[ 149], 80.00th=[ 174], 90.00th=[ 202], 95.00th=[ 227], 00:40:48.068 | 99.00th=[ 245], 99.50th=[ 253], 99.90th=[ 281], 99.95th=[ 285], 00:40:48.068 | 99.99th=[ 302] 00:40:48.068 bw ( KiB/s): min= 9800, max= 9800, per=60.59%, avg=9800.00, stdev= 0.00, samples=1 00:40:48.068 iops : min= 2450, max= 2450, avg=2450.00, stdev= 0.00, samples=1 00:40:48.068 lat (usec) : 250=97.56%, 500=2.42% 00:40:48.068 lat (msec) : 50=0.02% 00:40:48.068 cpu : usr=4.50%, sys=7.50%, ctx=4924, majf=0, minf=1 00:40:48.068 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 complete : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 issued rwts: total=2363,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.068 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:48.068 job3: (groupid=0, jobs=1): err= 0: pid=1275830: Fri Dec 13 06:46:39 2024 00:40:48.068 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec) 00:40:48.068 slat (nsec): min=10031, max=25044, avg=21531.26, stdev=3686.70 00:40:48.068 clat (usec): min=250, max=41157, avg=39184.83, stdev=8487.71 00:40:48.068 lat (usec): min=273, max=41178, avg=39206.36, stdev=8487.54 00:40:48.068 clat percentiles (usec): 00:40:48.068 | 1.00th=[ 251], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:48.068 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:48.068 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:48.068 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:40:48.068 | 99.99th=[41157] 00:40:48.068 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:40:48.068 slat (nsec): min=10487, max=41735, avg=12115.41, stdev=2131.46 00:40:48.068 clat (usec): min=147, max=373, avg=182.02, stdev=18.22 00:40:48.068 lat (usec): min=159, max=386, avg=194.13, stdev=18.88 00:40:48.068 clat percentiles (usec): 00:40:48.068 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 172], 00:40:48.068 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:40:48.068 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 198], 95.00th=[ 206], 00:40:48.068 | 99.00th=[ 229], 99.50th=[ 310], 99.90th=[ 375], 99.95th=[ 375], 00:40:48.068 | 99.99th=[ 375] 00:40:48.068 bw ( KiB/s): min= 4096, max= 4096, per=25.33%, avg=4096.00, stdev= 0.00, samples=1 00:40:48.068 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:48.068 lat (usec) : 250=95.14%, 500=0.75% 00:40:48.068 lat (msec) : 50=4.11% 00:40:48.068 cpu : usr=0.50%, sys=0.90%, ctx=535, majf=0, minf=2 00:40:48.068 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:48.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:48.068 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:48.068 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:48.068 00:40:48.068 Run status group 0 (all jobs): 00:40:48.068 READ: bw=9599KiB/s (9830kB/s), 86.9KiB/s-9443KiB/s (89.0kB/s-9669kB/s), io=9724KiB (9957kB), run=1001-1013msec 00:40:48.068 WRITE: bw=15.8MiB/s (16.6MB/s), 2022KiB/s-9.99MiB/s (2070kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1013msec 00:40:48.068 00:40:48.068 Disk stats (read/write): 00:40:48.068 nvme0n1: ios=70/512, merge=0/0, ticks=1323/95, in_queue=1418, util=97.70% 00:40:48.068 nvme0n2: ios=68/512, merge=0/0, ticks=1035/96, in_queue=1131, util=98.15% 00:40:48.068 nvme0n3: ios=1893/2048, merge=0/0, ticks=1354/315, in_queue=1669, util=98.05% 00:40:48.068 nvme0n4: ios=18/512, merge=0/0, ticks=697/90, in_queue=787, util=89.22% 00:40:48.068 06:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:40:48.068 [global] 00:40:48.068 thread=1 00:40:48.068 invalidate=1 00:40:48.068 rw=randwrite 00:40:48.068 time_based=1 00:40:48.068 runtime=1 00:40:48.068 ioengine=libaio 00:40:48.068 direct=1 00:40:48.068 bs=4096 00:40:48.068 iodepth=1 00:40:48.068 norandommap=0 00:40:48.068 numjobs=1 00:40:48.068 00:40:48.068 verify_dump=1 00:40:48.068 verify_backlog=512 00:40:48.068 verify_state_save=0 00:40:48.068 do_verify=1 00:40:48.068 verify=crc32c-intel 00:40:48.068 [job0] 00:40:48.068 filename=/dev/nvme0n1 00:40:48.068 [job1] 00:40:48.068 filename=/dev/nvme0n2 00:40:48.068 [job2] 00:40:48.068 filename=/dev/nvme0n3 00:40:48.068 [job3] 00:40:48.068 filename=/dev/nvme0n4 00:40:48.068 
Could not set queue depth (nvme0n1) 00:40:48.068 Could not set queue depth (nvme0n2) 00:40:48.068 Could not set queue depth (nvme0n3) 00:40:48.068 Could not set queue depth (nvme0n4) 00:40:48.325 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:48.325 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:48.325 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:48.325 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:48.325 fio-3.35 00:40:48.325 Starting 4 threads 00:40:49.695 00:40:49.695 job0: (groupid=0, jobs=1): err= 0: pid=1276198: Fri Dec 13 06:46:40 2024 00:40:49.695 read: IOPS=23, BW=95.0KiB/s (97.2kB/s)(96.0KiB/1011msec) 00:40:49.695 slat (nsec): min=7382, max=26068, avg=21111.04, stdev=5060.21 00:40:49.695 clat (usec): min=253, max=44632, avg=37718.44, stdev=11533.34 00:40:49.695 lat (usec): min=262, max=44655, avg=37739.55, stdev=11534.39 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 253], 5.00th=[ 453], 10.00th=[40633], 20.00th=[40633], 00:40:49.695 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:49.695 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:49.695 | 99.00th=[44827], 99.50th=[44827], 99.90th=[44827], 99.95th=[44827], 00:40:49.695 | 99.99th=[44827] 00:40:49.695 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:40:49.695 slat (nsec): min=9048, max=61303, avg=10529.24, stdev=2663.85 00:40:49.695 clat (usec): min=141, max=282, avg=192.72, stdev=21.67 00:40:49.695 lat (usec): min=151, max=343, avg=203.25, stdev=22.21 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 151], 5.00th=[ 155], 10.00th=[ 163], 20.00th=[ 176], 00:40:49.695 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 
198], 00:40:49.695 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 212], 95.00th=[ 227], 00:40:49.695 | 99.00th=[ 265], 99.50th=[ 277], 99.90th=[ 285], 99.95th=[ 285], 00:40:49.695 | 99.99th=[ 285] 00:40:49.695 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:40:49.695 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:49.695 lat (usec) : 250=93.28%, 500=2.61% 00:40:49.695 lat (msec) : 50=4.10% 00:40:49.695 cpu : usr=0.10%, sys=0.69%, ctx=537, majf=0, minf=1 00:40:49.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:49.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:49.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:49.695 job1: (groupid=0, jobs=1): err= 0: pid=1276199: Fri Dec 13 06:46:40 2024 00:40:49.695 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:40:49.695 slat (nsec): min=9352, max=24114, avg=22622.95, stdev=3039.86 00:40:49.695 clat (usec): min=40517, max=43982, avg=41086.55, stdev=657.49 00:40:49.695 lat (usec): min=40527, max=44006, avg=41109.18, stdev=658.19 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:49.695 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:49.695 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:49.695 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:40:49.695 | 99.99th=[43779] 00:40:49.695 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:40:49.695 slat (nsec): min=9466, max=40589, avg=10518.75, stdev=2015.27 00:40:49.695 clat (usec): min=133, max=286, avg=192.58, stdev=21.15 00:40:49.695 lat (usec): min=152, max=314, avg=203.10, stdev=21.33 
00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 167], 20.00th=[ 176], 00:40:49.695 | 30.00th=[ 184], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:40:49.695 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 217], 95.00th=[ 229], 00:40:49.695 | 99.00th=[ 245], 99.50th=[ 258], 99.90th=[ 285], 99.95th=[ 285], 00:40:49.695 | 99.99th=[ 285] 00:40:49.695 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:40:49.695 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:49.695 lat (usec) : 250=95.13%, 500=0.75% 00:40:49.695 lat (msec) : 50=4.12% 00:40:49.695 cpu : usr=0.20%, sys=0.59%, ctx=537, majf=0, minf=1 00:40:49.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:49.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:49.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:49.695 job2: (groupid=0, jobs=1): err= 0: pid=1276200: Fri Dec 13 06:46:40 2024 00:40:49.695 read: IOPS=24, BW=96.1KiB/s (98.4kB/s)(100KiB/1041msec) 00:40:49.695 slat (nsec): min=9855, max=25186, avg=21741.16, stdev=4530.16 00:40:49.695 clat (usec): min=240, max=41960, avg=37757.01, stdev=11294.00 00:40:49.695 lat (usec): min=262, max=41984, avg=37778.75, stdev=11293.89 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 241], 5.00th=[ 247], 10.00th=[40633], 20.00th=[40633], 00:40:49.695 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:49.695 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:40:49.695 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:49.695 | 99.99th=[42206] 00:40:49.695 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:40:49.695 slat (nsec): 
min=9868, max=41275, avg=11093.52, stdev=2329.15 00:40:49.695 clat (usec): min=155, max=284, avg=174.48, stdev=12.37 00:40:49.695 lat (usec): min=165, max=325, avg=185.58, stdev=13.48 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 167], 00:40:49.695 | 30.00th=[ 169], 40.00th=[ 169], 50.00th=[ 174], 60.00th=[ 174], 00:40:49.695 | 70.00th=[ 178], 80.00th=[ 180], 90.00th=[ 188], 95.00th=[ 196], 00:40:49.695 | 99.00th=[ 223], 99.50th=[ 239], 99.90th=[ 285], 99.95th=[ 285], 00:40:49.695 | 99.99th=[ 285] 00:40:49.695 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:40:49.695 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:49.695 lat (usec) : 250=95.34%, 500=0.37% 00:40:49.695 lat (msec) : 50=4.28% 00:40:49.695 cpu : usr=0.29%, sys=1.06%, ctx=537, majf=0, minf=1 00:40:49.695 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:49.695 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.695 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:49.695 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:49.695 job3: (groupid=0, jobs=1): err= 0: pid=1276201: Fri Dec 13 06:46:40 2024 00:40:49.695 read: IOPS=23, BW=92.2KiB/s (94.4kB/s)(96.0KiB/1041msec) 00:40:49.695 slat (nsec): min=9311, max=31382, avg=22655.08, stdev=4444.54 00:40:49.695 clat (usec): min=370, max=41979, avg=39290.85, stdev=8293.47 00:40:49.695 lat (usec): min=397, max=42002, avg=39313.51, stdev=8292.63 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 371], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:40:49.695 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:40:49.695 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:49.695 | 99.00th=[42206], 
99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:49.695 | 99.99th=[42206] 00:40:49.695 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:40:49.695 slat (nsec): min=9293, max=37721, avg=10285.86, stdev=1497.75 00:40:49.695 clat (usec): min=155, max=364, avg=174.85, stdev=14.20 00:40:49.695 lat (usec): min=164, max=402, avg=185.14, stdev=15.00 00:40:49.695 clat percentiles (usec): 00:40:49.695 | 1.00th=[ 159], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 167], 00:40:49.695 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:40:49.695 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 192], 00:40:49.695 | 99.00th=[ 212], 99.50th=[ 277], 99.90th=[ 363], 99.95th=[ 363], 00:40:49.695 | 99.99th=[ 363] 00:40:49.696 bw ( KiB/s): min= 4096, max= 4096, per=52.05%, avg=4096.00, stdev= 0.00, samples=1 00:40:49.696 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:40:49.696 lat (usec) : 250=94.96%, 500=0.75% 00:40:49.696 lat (msec) : 50=4.29% 00:40:49.696 cpu : usr=0.19%, sys=0.58%, ctx=537, majf=0, minf=1 00:40:49.696 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:49.696 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.696 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:49.696 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:49.696 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:49.696 00:40:49.696 Run status group 0 (all jobs): 00:40:49.696 READ: bw=365KiB/s (374kB/s), 87.0KiB/s-96.1KiB/s (89.1kB/s-98.4kB/s), io=380KiB (389kB), run=1011-1041msec 00:40:49.696 WRITE: bw=7869KiB/s (8058kB/s), 1967KiB/s-2026KiB/s (2015kB/s-2074kB/s), io=8192KiB (8389kB), run=1011-1041msec 00:40:49.696 00:40:49.696 Disk stats (read/write): 00:40:49.696 nvme0n1: ios=70/512, merge=0/0, ticks=927/97, in_queue=1024, util=91.18% 00:40:49.696 nvme0n2: ios=57/512, merge=0/0, ticks=1741/95, in_queue=1836, 
util=96.24% 00:40:49.696 nvme0n3: ios=77/512, merge=0/0, ticks=813/85, in_queue=898, util=90.96% 00:40:49.696 nvme0n4: ios=43/512, merge=0/0, ticks=1685/86, in_queue=1771, util=98.22% 00:40:49.696 06:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:40:49.696 [global] 00:40:49.696 thread=1 00:40:49.696 invalidate=1 00:40:49.696 rw=write 00:40:49.696 time_based=1 00:40:49.696 runtime=1 00:40:49.696 ioengine=libaio 00:40:49.696 direct=1 00:40:49.696 bs=4096 00:40:49.696 iodepth=128 00:40:49.696 norandommap=0 00:40:49.696 numjobs=1 00:40:49.696 00:40:49.696 verify_dump=1 00:40:49.696 verify_backlog=512 00:40:49.696 verify_state_save=0 00:40:49.696 do_verify=1 00:40:49.696 verify=crc32c-intel 00:40:49.696 [job0] 00:40:49.696 filename=/dev/nvme0n1 00:40:49.696 [job1] 00:40:49.696 filename=/dev/nvme0n2 00:40:49.696 [job2] 00:40:49.696 filename=/dev/nvme0n3 00:40:49.696 [job3] 00:40:49.696 filename=/dev/nvme0n4 00:40:49.696 Could not set queue depth (nvme0n1) 00:40:49.696 Could not set queue depth (nvme0n2) 00:40:49.696 Could not set queue depth (nvme0n3) 00:40:49.696 Could not set queue depth (nvme0n4) 00:40:49.696 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:49.696 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:49.696 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:49.696 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:49.696 fio-3.35 00:40:49.696 Starting 4 threads 00:40:51.066 00:40:51.066 job0: (groupid=0, jobs=1): err= 0: pid=1276559: Fri Dec 13 06:46:42 2024 00:40:51.066 read: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:40:51.066 slat (nsec): 
min=1699, max=11402k, avg=118337.72, stdev=768652.29 00:40:51.066 clat (usec): min=7222, max=34047, avg=15436.00, stdev=5670.51 00:40:51.066 lat (usec): min=7229, max=34061, avg=15554.34, stdev=5732.84 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:40:51.066 | 30.00th=[10683], 40.00th=[12256], 50.00th=[13698], 60.00th=[16581], 00:40:51.066 | 70.00th=[18220], 80.00th=[20055], 90.00th=[23987], 95.00th=[27132], 00:40:51.066 | 99.00th=[28705], 99.50th=[32637], 99.90th=[33424], 99.95th=[33424], 00:40:51.066 | 99.99th=[33817] 00:40:51.066 write: IOPS=3611, BW=14.1MiB/s (14.8MB/s)(14.2MiB/1005msec); 0 zone resets 00:40:51.066 slat (usec): min=2, max=11031, avg=152.53, stdev=746.43 00:40:51.066 clat (usec): min=3159, max=43449, avg=19815.20, stdev=7755.75 00:40:51.066 lat (usec): min=4689, max=43460, avg=19967.73, stdev=7807.18 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 7963], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[14484], 00:40:51.066 | 30.00th=[16057], 40.00th=[16909], 50.00th=[17695], 60.00th=[19792], 00:40:51.066 | 70.00th=[21103], 80.00th=[23725], 90.00th=[33162], 95.00th=[36439], 00:40:51.066 | 99.00th=[41157], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:40:51.066 | 99.99th=[43254] 00:40:51.066 bw ( KiB/s): min=12288, max=16384, per=21.38%, avg=14336.00, stdev=2896.31, samples=2 00:40:51.066 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:40:51.066 lat (msec) : 4=0.01%, 10=13.92%, 20=56.65%, 50=29.42% 00:40:51.066 cpu : usr=3.29%, sys=4.48%, ctx=389, majf=0, minf=1 00:40:51.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:40:51.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:51.066 issued rwts: total=3584,3630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.066 latency : target=0, window=0, 
percentile=100.00%, depth=128 00:40:51.066 job1: (groupid=0, jobs=1): err= 0: pid=1276560: Fri Dec 13 06:46:42 2024 00:40:51.066 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec) 00:40:51.066 slat (nsec): min=1802, max=12291k, avg=97209.29, stdev=753217.25 00:40:51.066 clat (usec): min=2382, max=44240, avg=12577.63, stdev=5420.81 00:40:51.066 lat (usec): min=2407, max=44249, avg=12674.84, stdev=5488.26 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 6259], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 8717], 00:40:51.066 | 30.00th=[ 9503], 40.00th=[10028], 50.00th=[11207], 60.00th=[12256], 00:40:51.066 | 70.00th=[12649], 80.00th=[14877], 90.00th=[21103], 95.00th=[26084], 00:40:51.066 | 99.00th=[29230], 99.50th=[35390], 99.90th=[44303], 99.95th=[44303], 00:40:51.066 | 99.99th=[44303] 00:40:51.066 write: IOPS=4831, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1008msec); 0 zone resets 00:40:51.066 slat (usec): min=2, max=13457, avg=104.04, stdev=766.56 00:40:51.066 clat (usec): min=1818, max=45928, avg=14349.74, stdev=9260.75 00:40:51.066 lat (usec): min=1833, max=45951, avg=14453.78, stdev=9339.66 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 5276], 5.00th=[ 6587], 10.00th=[ 7439], 20.00th=[ 8455], 00:40:51.066 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[12518], 00:40:51.066 | 70.00th=[14222], 80.00th=[17433], 90.00th=[23462], 95.00th=[38536], 00:40:51.066 | 99.00th=[44303], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:40:51.066 | 99.99th=[45876] 00:40:51.066 bw ( KiB/s): min=18296, max=19648, per=28.29%, avg=18972.00, stdev=956.01, samples=2 00:40:51.066 iops : min= 4574, max= 4912, avg=4743.00, stdev=239.00, samples=2 00:40:51.066 lat (msec) : 2=0.02%, 4=0.45%, 10=40.07%, 20=44.87%, 50=14.58% 00:40:51.066 cpu : usr=3.97%, sys=6.45%, ctx=299, majf=0, minf=1 00:40:51.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:40:51.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:40:51.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:51.066 issued rwts: total=4608,4870,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:51.066 job2: (groupid=0, jobs=1): err= 0: pid=1276561: Fri Dec 13 06:46:42 2024 00:40:51.066 read: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec) 00:40:51.066 slat (usec): min=2, max=14177, avg=143.31, stdev=924.91 00:40:51.066 clat (usec): min=7538, max=56792, avg=17096.32, stdev=8064.72 00:40:51.066 lat (usec): min=7545, max=56804, avg=17239.63, stdev=8149.92 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10552], 20.00th=[10945], 00:40:51.066 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12518], 60.00th=[14746], 00:40:51.066 | 70.00th=[22676], 80.00th=[23987], 90.00th=[27395], 95.00th=[30278], 00:40:51.066 | 99.00th=[45876], 99.50th=[51119], 99.90th=[56886], 99.95th=[56886], 00:40:51.066 | 99.99th=[56886] 00:40:51.066 write: IOPS=3260, BW=12.7MiB/s (13.4MB/s)(12.8MiB/1005msec); 0 zone resets 00:40:51.066 slat (usec): min=2, max=15734, avg=159.60, stdev=860.84 00:40:51.066 clat (usec): min=3490, max=64562, avg=22872.86, stdev=9554.36 00:40:51.066 lat (usec): min=9503, max=64574, avg=23032.46, stdev=9608.74 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[10421], 5.00th=[12649], 10.00th=[15270], 20.00th=[16712], 00:40:51.066 | 30.00th=[17171], 40.00th=[17695], 50.00th=[19792], 60.00th=[21103], 00:40:51.066 | 70.00th=[24249], 80.00th=[29492], 90.00th=[35914], 95.00th=[45351], 00:40:51.066 | 99.00th=[56886], 99.50th=[58983], 99.90th=[64750], 99.95th=[64750], 00:40:51.066 | 99.99th=[64750] 00:40:51.066 bw ( KiB/s): min= 9592, max=15608, per=18.79%, avg=12600.00, stdev=4253.95, samples=2 00:40:51.066 iops : min= 2398, max= 3902, avg=3150.00, stdev=1063.49, samples=2 00:40:51.066 lat (msec) : 4=0.02%, 10=3.12%, 20=53.90%, 50=41.33%, 100=1.64% 00:40:51.066 cpu 
: usr=3.49%, sys=3.98%, ctx=363, majf=0, minf=1 00:40:51.066 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:40:51.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.066 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:51.066 issued rwts: total=3072,3277,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.066 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:51.066 job3: (groupid=0, jobs=1): err= 0: pid=1276562: Fri Dec 13 06:46:42 2024 00:40:51.066 read: IOPS=4684, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1007msec) 00:40:51.066 slat (nsec): min=1353, max=15578k, avg=99856.59, stdev=830067.26 00:40:51.066 clat (usec): min=3290, max=46160, avg=13462.45, stdev=7438.38 00:40:51.066 lat (usec): min=5005, max=46196, avg=13562.30, stdev=7509.57 00:40:51.066 clat percentiles (usec): 00:40:51.066 | 1.00th=[ 7242], 5.00th=[ 7701], 10.00th=[ 8160], 20.00th=[ 9896], 00:40:51.066 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11076], 60.00th=[11469], 00:40:51.066 | 70.00th=[11863], 80.00th=[13435], 90.00th=[25035], 95.00th=[31851], 00:40:51.066 | 99.00th=[41157], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:40:51.066 | 99.99th=[46400] 00:40:51.066 write: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec); 0 zone resets 00:40:51.066 slat (usec): min=2, max=18392, avg=95.09, stdev=778.72 00:40:51.066 clat (usec): min=1137, max=46241, avg=12222.67, stdev=6866.18 00:40:51.066 lat (usec): min=1150, max=46274, avg=12317.77, stdev=6925.98 00:40:51.066 clat percentiles (usec): 00:40:51.067 | 1.00th=[ 4228], 5.00th=[ 5538], 10.00th=[ 6390], 20.00th=[ 8586], 00:40:51.067 | 30.00th=[ 9110], 40.00th=[ 9372], 50.00th=[10421], 60.00th=[10814], 00:40:51.067 | 70.00th=[11207], 80.00th=[14353], 90.00th=[22414], 95.00th=[29754], 00:40:51.067 | 99.00th=[38536], 99.50th=[38536], 99.90th=[40109], 99.95th=[41157], 00:40:51.067 | 99.99th=[46400] 00:40:51.067 bw ( KiB/s): min=16384, max=24424, 
per=30.43%, avg=20404.00, stdev=5685.14, samples=2 00:40:51.067 iops : min= 4096, max= 6106, avg=5101.00, stdev=1421.28, samples=2 00:40:51.067 lat (msec) : 2=0.14%, 4=0.38%, 10=34.86%, 20=51.96%, 50=12.67% 00:40:51.067 cpu : usr=4.08%, sys=7.36%, ctx=254, majf=0, minf=1 00:40:51.067 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:40:51.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:51.067 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:51.067 issued rwts: total=4717,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:51.067 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:51.067 00:40:51.067 Run status group 0 (all jobs): 00:40:51.067 READ: bw=61.9MiB/s (64.9MB/s), 11.9MiB/s-18.3MiB/s (12.5MB/s-19.2MB/s), io=62.4MiB (65.5MB), run=1005-1008msec 00:40:51.067 WRITE: bw=65.5MiB/s (68.7MB/s), 12.7MiB/s-19.9MiB/s (13.4MB/s-20.8MB/s), io=66.0MiB (69.2MB), run=1005-1008msec 00:40:51.067 00:40:51.067 Disk stats (read/write): 00:40:51.067 nvme0n1: ios=3070/3072, merge=0/0, ticks=21542/30725, in_queue=52267, util=93.08% 00:40:51.067 nvme0n2: ios=3605/3682, merge=0/0, ticks=41431/51066, in_queue=92497, util=99.69% 00:40:51.067 nvme0n3: ios=2603/2874, merge=0/0, ticks=21803/32888, in_queue=54691, util=95.89% 00:40:51.067 nvme0n4: ios=3888/4096, merge=0/0, ticks=37990/35893, in_queue=73883, util=97.14% 00:40:51.067 06:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:40:51.067 [global] 00:40:51.067 thread=1 00:40:51.067 invalidate=1 00:40:51.067 rw=randwrite 00:40:51.067 time_based=1 00:40:51.067 runtime=1 00:40:51.067 ioengine=libaio 00:40:51.067 direct=1 00:40:51.067 bs=4096 00:40:51.067 iodepth=128 00:40:51.067 norandommap=0 00:40:51.067 numjobs=1 00:40:51.067 00:40:51.067 verify_dump=1 00:40:51.067 verify_backlog=512 
00:40:51.067 verify_state_save=0 00:40:51.067 do_verify=1 00:40:51.067 verify=crc32c-intel 00:40:51.067 [job0] 00:40:51.067 filename=/dev/nvme0n1 00:40:51.067 [job1] 00:40:51.067 filename=/dev/nvme0n2 00:40:51.067 [job2] 00:40:51.067 filename=/dev/nvme0n3 00:40:51.067 [job3] 00:40:51.067 filename=/dev/nvme0n4 00:40:51.067 Could not set queue depth (nvme0n1) 00:40:51.067 Could not set queue depth (nvme0n2) 00:40:51.067 Could not set queue depth (nvme0n3) 00:40:51.067 Could not set queue depth (nvme0n4) 00:40:51.324 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:51.324 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:51.324 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:51.324 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:40:51.324 fio-3.35 00:40:51.324 Starting 4 threads 00:40:52.693 00:40:52.693 job0: (groupid=0, jobs=1): err= 0: pid=1276930: Fri Dec 13 06:46:44 2024 00:40:52.693 read: IOPS=3419, BW=13.4MiB/s (14.0MB/s)(13.4MiB/1006msec) 00:40:52.693 slat (nsec): min=1600, max=23726k, avg=129421.02, stdev=1073605.82 00:40:52.693 clat (usec): min=2272, max=45069, avg=17135.98, stdev=7279.02 00:40:52.693 lat (usec): min=5590, max=45097, avg=17265.40, stdev=7350.62 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 5669], 5.00th=[ 8717], 10.00th=[10290], 20.00th=[11731], 00:40:52.693 | 30.00th=[12780], 40.00th=[13173], 50.00th=[14746], 60.00th=[16188], 00:40:52.693 | 70.00th=[19268], 80.00th=[23200], 90.00th=[28443], 95.00th=[32375], 00:40:52.693 | 99.00th=[36439], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:40:52.693 | 99.99th=[44827] 00:40:52.693 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:40:52.693 slat (usec): min=2, max=16523, avg=147.64, 
stdev=1021.76 00:40:52.693 clat (usec): min=1404, max=76066, avg=19023.96, stdev=12559.48 00:40:52.693 lat (usec): min=1415, max=76078, avg=19171.60, stdev=12639.12 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 5997], 5.00th=[ 8094], 10.00th=[ 8717], 20.00th=[ 9503], 00:40:52.693 | 30.00th=[11863], 40.00th=[14222], 50.00th=[16581], 60.00th=[17695], 00:40:52.693 | 70.00th=[19268], 80.00th=[25297], 90.00th=[31589], 95.00th=[52167], 00:40:52.693 | 99.00th=[68682], 99.50th=[73925], 99.90th=[74974], 99.95th=[76022], 00:40:52.693 | 99.99th=[76022] 00:40:52.693 bw ( KiB/s): min=12904, max=15768, per=21.68%, avg=14336.00, stdev=2025.15, samples=2 00:40:52.693 iops : min= 3226, max= 3942, avg=3584.00, stdev=506.29, samples=2 00:40:52.693 lat (msec) : 2=0.07%, 4=0.01%, 10=16.16%, 20=57.23%, 50=23.82% 00:40:52.693 lat (msec) : 100=2.71% 00:40:52.693 cpu : usr=3.58%, sys=4.68%, ctx=230, majf=0, minf=1 00:40:52.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:40:52.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:52.693 issued rwts: total=3440,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:52.693 job1: (groupid=0, jobs=1): err= 0: pid=1276931: Fri Dec 13 06:46:44 2024 00:40:52.693 read: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec) 00:40:52.693 slat (nsec): min=1331, max=19379k, avg=73348.11, stdev=570704.88 00:40:52.693 clat (usec): min=4945, max=51450, avg=9707.78, stdev=5884.33 00:40:52.693 lat (usec): min=4957, max=51474, avg=9781.13, stdev=5932.32 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 5538], 5.00th=[ 6194], 10.00th=[ 6521], 20.00th=[ 7242], 00:40:52.693 | 30.00th=[ 7701], 40.00th=[ 7963], 50.00th=[ 8160], 60.00th=[ 8586], 00:40:52.693 | 70.00th=[ 8979], 80.00th=[ 9634], 90.00th=[11994], 95.00th=[24511], 
00:40:52.693 | 99.00th=[40109], 99.50th=[40109], 99.90th=[40109], 99.95th=[40109], 00:40:52.693 | 99.99th=[51643] 00:40:52.693 write: IOPS=6206, BW=24.2MiB/s (25.4MB/s)(24.3MiB/1004msec); 0 zone resets 00:40:52.693 slat (usec): min=2, max=34939, avg=81.44, stdev=765.44 00:40:52.693 clat (usec): min=3240, max=49575, avg=10164.76, stdev=6624.03 00:40:52.693 lat (usec): min=3717, max=49606, avg=10246.19, stdev=6704.86 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 4686], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7570], 00:40:52.693 | 30.00th=[ 7767], 40.00th=[ 7898], 50.00th=[ 7963], 60.00th=[ 8029], 00:40:52.693 | 70.00th=[ 8356], 80.00th=[10028], 90.00th=[12125], 95.00th=[30016], 00:40:52.693 | 99.00th=[37487], 99.50th=[37487], 99.90th=[41157], 99.95th=[44303], 00:40:52.693 | 99.99th=[49546] 00:40:52.693 bw ( KiB/s): min=17272, max=31880, per=37.16%, avg=24576.00, stdev=10329.42, samples=2 00:40:52.693 iops : min= 4318, max= 7970, avg=6144.00, stdev=2582.35, samples=2 00:40:52.693 lat (msec) : 4=0.07%, 10=81.40%, 20=11.69%, 50=6.83%, 100=0.01% 00:40:52.693 cpu : usr=6.38%, sys=6.48%, ctx=518, majf=0, minf=1 00:40:52.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:40:52.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:52.693 issued rwts: total=6144,6231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:52.693 job2: (groupid=0, jobs=1): err= 0: pid=1276932: Fri Dec 13 06:46:44 2024 00:40:52.693 read: IOPS=2570, BW=10.0MiB/s (10.5MB/s)(10.1MiB/1008msec) 00:40:52.693 slat (nsec): min=1224, max=23735k, avg=179135.27, stdev=1363038.46 00:40:52.693 clat (usec): min=6540, max=63047, avg=22092.43, stdev=13630.30 00:40:52.693 lat (usec): min=7375, max=63053, avg=22271.57, stdev=13734.31 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 
8586], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11863], 00:40:52.693 | 30.00th=[13042], 40.00th=[14615], 50.00th=[16188], 60.00th=[18220], 00:40:52.693 | 70.00th=[23987], 80.00th=[34866], 90.00th=[45876], 95.00th=[50594], 00:40:52.693 | 99.00th=[62653], 99.50th=[63177], 99.90th=[63177], 99.95th=[63177], 00:40:52.693 | 99.99th=[63177] 00:40:52.693 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:40:52.693 slat (nsec): min=1898, max=31138k, avg=163380.10, stdev=1364073.72 00:40:52.693 clat (usec): min=1027, max=71364, avg=23012.51, stdev=15714.85 00:40:52.693 lat (usec): min=1036, max=71395, avg=23175.89, stdev=15848.79 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 1713], 5.00th=[ 8029], 10.00th=[ 8586], 20.00th=[ 9503], 00:40:52.693 | 30.00th=[10683], 40.00th=[11731], 50.00th=[15795], 60.00th=[25560], 00:40:52.693 | 70.00th=[32375], 80.00th=[40633], 90.00th=[45351], 95.00th=[51119], 00:40:52.693 | 99.00th=[62653], 99.50th=[66323], 99.90th=[66323], 99.95th=[70779], 00:40:52.693 | 99.99th=[71828] 00:40:52.693 bw ( KiB/s): min= 7424, max=16384, per=18.00%, avg=11904.00, stdev=6335.68, samples=2 00:40:52.693 iops : min= 1856, max= 4096, avg=2976.00, stdev=1583.92, samples=2 00:40:52.693 lat (msec) : 2=0.74%, 4=0.78%, 10=15.20%, 20=42.59%, 50=35.55% 00:40:52.693 lat (msec) : 100=5.14% 00:40:52.693 cpu : usr=3.28%, sys=2.68%, ctx=230, majf=0, minf=2 00:40:52.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:40:52.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:40:52.693 issued rwts: total=2591,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:52.693 job3: (groupid=0, jobs=1): err= 0: pid=1276933: Fri Dec 13 06:46:44 2024 00:40:52.693 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:40:52.693 slat 
(nsec): min=1484, max=16823k, avg=109921.31, stdev=937962.58 00:40:52.693 clat (usec): min=2007, max=38399, avg=14926.39, stdev=6628.27 00:40:52.693 lat (usec): min=2016, max=38416, avg=15036.31, stdev=6697.65 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 4047], 5.00th=[ 6915], 10.00th=[ 9372], 20.00th=[ 9765], 00:40:52.693 | 30.00th=[10159], 40.00th=[10814], 50.00th=[12125], 60.00th=[15533], 00:40:52.693 | 70.00th=[17695], 80.00th=[21627], 90.00th=[25035], 95.00th=[28181], 00:40:52.693 | 99.00th=[33817], 99.50th=[34341], 99.90th=[38011], 99.95th=[38011], 00:40:52.693 | 99.99th=[38536] 00:40:52.693 write: IOPS=3800, BW=14.8MiB/s (15.6MB/s)(15.0MiB/1012msec); 0 zone resets 00:40:52.693 slat (usec): min=2, max=14020, avg=133.38, stdev=900.42 00:40:52.693 clat (usec): min=201, max=116687, avg=19365.97, stdev=19201.00 00:40:52.693 lat (usec): min=210, max=116700, avg=19499.35, stdev=19330.84 00:40:52.693 clat percentiles (usec): 00:40:52.693 | 1.00th=[ 258], 5.00th=[ 2245], 10.00th=[ 6915], 20.00th=[ 9110], 00:40:52.693 | 30.00th=[ 10421], 40.00th=[ 11469], 50.00th=[ 14615], 60.00th=[ 16188], 00:40:52.693 | 70.00th=[ 18482], 80.00th=[ 25297], 90.00th=[ 31589], 95.00th=[ 55313], 00:40:52.693 | 99.00th=[110625], 99.50th=[112722], 99.90th=[116917], 99.95th=[116917], 00:40:52.693 | 99.99th=[116917] 00:40:52.693 bw ( KiB/s): min= 9272, max=20480, per=22.49%, avg=14876.00, stdev=7925.25, samples=2 00:40:52.693 iops : min= 2318, max= 5120, avg=3719.00, stdev=1981.31, samples=2 00:40:52.693 lat (usec) : 250=0.46%, 500=0.78%, 750=0.27%, 1000=0.23% 00:40:52.693 lat (msec) : 2=0.67%, 4=1.24%, 10=24.27%, 20=46.16%, 50=22.73% 00:40:52.693 lat (msec) : 100=2.18%, 250=1.01% 00:40:52.693 cpu : usr=2.97%, sys=5.14%, ctx=355, majf=0, minf=1 00:40:52.693 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:40:52.693 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.693 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:40:52.693 issued rwts: total=3584,3846,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.693 latency : target=0, window=0, percentile=100.00%, depth=128 00:40:52.693 00:40:52.693 Run status group 0 (all jobs): 00:40:52.693 READ: bw=60.8MiB/s (63.8MB/s), 10.0MiB/s-23.9MiB/s (10.5MB/s-25.1MB/s), io=61.6MiB (64.5MB), run=1004-1012msec 00:40:52.693 WRITE: bw=64.6MiB/s (67.7MB/s), 11.9MiB/s-24.2MiB/s (12.5MB/s-25.4MB/s), io=65.4MiB (68.5MB), run=1004-1012msec 00:40:52.693 00:40:52.693 Disk stats (read/write): 00:40:52.693 nvme0n1: ios=3121/3143, merge=0/0, ticks=52621/51286, in_queue=103907, util=85.67% 00:40:52.693 nvme0n2: ios=4872/5120, merge=0/0, ticks=23970/25799, in_queue=49769, util=90.65% 00:40:52.693 nvme0n3: ios=2609/2560, merge=0/0, ticks=38947/34103, in_queue=73050, util=94.70% 00:40:52.693 nvme0n4: ios=3087/3310, merge=0/0, ticks=46042/57225, in_queue=103267, util=93.93% 00:40:52.693 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:40:52.693 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1277155 00:40:52.693 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:40:52.693 06:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:40:52.693 [global] 00:40:52.693 thread=1 00:40:52.693 invalidate=1 00:40:52.693 rw=read 00:40:52.693 time_based=1 00:40:52.693 runtime=10 00:40:52.693 ioengine=libaio 00:40:52.693 direct=1 00:40:52.693 bs=4096 00:40:52.693 iodepth=1 00:40:52.693 norandommap=1 00:40:52.693 numjobs=1 00:40:52.693 00:40:52.693 [job0] 00:40:52.693 filename=/dev/nvme0n1 00:40:52.693 [job1] 00:40:52.693 filename=/dev/nvme0n2 00:40:52.693 [job2] 00:40:52.693 filename=/dev/nvme0n3 00:40:52.693 [job3] 00:40:52.693 filename=/dev/nvme0n4 00:40:52.693 
Could not set queue depth (nvme0n1) 00:40:52.693 Could not set queue depth (nvme0n2) 00:40:52.693 Could not set queue depth (nvme0n3) 00:40:52.693 Could not set queue depth (nvme0n4) 00:40:52.950 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:52.950 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:52.950 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:52.950 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:52.950 fio-3.35 00:40:52.950 Starting 4 threads 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:40:56.226 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=610304, buflen=4096 00:40:56.226 fio: pid=1277311, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:40:56.226 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47841280, buflen=4096 00:40:56.226 fio: pid=1277307, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:40:56.226 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=37654528, buflen=4096 
00:40:56.226 fio: pid=1277295, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.226 06:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:40:56.484 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=13709312, buflen=4096 00:40:56.484 fio: pid=1277296, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:40:56.484 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.484 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:40:56.484 00:40:56.484 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277295: Fri Dec 13 06:46:48 2024 00:40:56.484 read: IOPS=2946, BW=11.5MiB/s (12.1MB/s)(35.9MiB/3120msec) 00:40:56.484 slat (usec): min=5, max=12800, avg=10.49, stdev=187.44 00:40:56.484 clat (usec): min=172, max=41969, avg=324.85, stdev=2069.66 00:40:56.484 lat (usec): min=179, max=41992, avg=335.33, stdev=2078.70 00:40:56.484 clat percentiles (usec): 00:40:56.484 | 1.00th=[ 192], 5.00th=[ 200], 10.00th=[ 202], 20.00th=[ 206], 00:40:56.484 | 30.00th=[ 208], 40.00th=[ 210], 50.00th=[ 212], 60.00th=[ 217], 00:40:56.484 | 70.00th=[ 219], 80.00th=[ 225], 90.00th=[ 243], 95.00th=[ 269], 00:40:56.484 | 99.00th=[ 289], 99.50th=[ 334], 99.90th=[41157], 99.95th=[41157], 00:40:56.484 | 99.99th=[42206] 00:40:56.484 bw ( KiB/s): min= 1824, max=18136, per=41.35%, avg=12127.33, stdev=6307.30, samples=6 
00:40:56.484 iops : min= 456, max= 4534, avg=3031.83, stdev=1576.82, samples=6 00:40:56.484 lat (usec) : 250=91.74%, 500=7.96%, 750=0.01% 00:40:56.484 lat (msec) : 20=0.01%, 50=0.26% 00:40:56.484 cpu : usr=1.96%, sys=3.78%, ctx=9198, majf=0, minf=2 00:40:56.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 issued rwts: total=9194,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:56.484 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277296: Fri Dec 13 06:46:48 2024 00:40:56.484 read: IOPS=1007, BW=4028KiB/s (4124kB/s)(13.1MiB/3324msec) 00:40:56.484 slat (usec): min=5, max=30105, avg=30.20, stdev=671.23 00:40:56.484 clat (usec): min=172, max=42025, avg=954.86, stdev=5366.06 00:40:56.484 lat (usec): min=180, max=42038, avg=985.07, stdev=5406.54 00:40:56.484 clat percentiles (usec): 00:40:56.484 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 200], 00:40:56.484 | 30.00th=[ 229], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:40:56.484 | 70.00th=[ 251], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 281], 00:40:56.484 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[42206], 00:40:56.484 | 99.99th=[42206] 00:40:56.484 bw ( KiB/s): min= 104, max=11023, per=9.84%, avg=2886.50, stdev=4314.12, samples=6 00:40:56.484 iops : min= 26, max= 2755, avg=721.50, stdev=1078.25, samples=6 00:40:56.484 lat (usec) : 250=70.10%, 500=28.02%, 750=0.09% 00:40:56.484 lat (msec) : 50=1.76% 00:40:56.484 cpu : usr=0.18%, sys=1.17%, ctx=3353, majf=0, minf=2 00:40:56.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 
complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 issued rwts: total=3348,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:56.484 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277307: Fri Dec 13 06:46:48 2024 00:40:56.484 read: IOPS=4061, BW=15.9MiB/s (16.6MB/s)(45.6MiB/2876msec) 00:40:56.484 slat (nsec): min=6762, max=40807, avg=7974.68, stdev=1275.03 00:40:56.484 clat (usec): min=187, max=40612, avg=234.59, stdev=374.64 00:40:56.484 lat (usec): min=196, max=40619, avg=242.57, stdev=374.65 00:40:56.484 clat percentiles (usec): 00:40:56.484 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 210], 00:40:56.484 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:40:56.484 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:40:56.484 | 99.00th=[ 408], 99.50th=[ 412], 99.90th=[ 420], 99.95th=[ 429], 00:40:56.484 | 99.99th=[ 506] 00:40:56.484 bw ( KiB/s): min=13648, max=18272, per=55.52%, avg=16281.60, stdev=1661.74, samples=5 00:40:56.484 iops : min= 3412, max= 4568, avg=4070.40, stdev=415.44, samples=5 00:40:56.484 lat (usec) : 250=89.26%, 500=10.72%, 750=0.01% 00:40:56.484 lat (msec) : 50=0.01% 00:40:56.484 cpu : usr=1.98%, sys=6.64%, ctx=11681, majf=0, minf=2 00:40:56.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 issued rwts: total=11681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:56.484 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1277311: Fri Dec 13 06:46:48 2024 00:40:56.484 read: IOPS=56, BW=223KiB/s 
(228kB/s)(596KiB/2673msec) 00:40:56.484 slat (nsec): min=7308, max=55172, avg=14630.11, stdev=7705.44 00:40:56.484 clat (usec): min=228, max=42018, avg=17776.94, stdev=20240.58 00:40:56.484 lat (usec): min=238, max=42040, avg=17791.51, stdev=20244.73 00:40:56.484 clat percentiles (usec): 00:40:56.484 | 1.00th=[ 237], 5.00th=[ 243], 10.00th=[ 243], 20.00th=[ 247], 00:40:56.484 | 30.00th=[ 262], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[40633], 00:40:56.484 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:40:56.484 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:40:56.484 | 99.99th=[42206] 00:40:56.484 bw ( KiB/s): min= 104, max= 424, per=0.79%, avg=231.80, stdev=121.67, samples=5 00:40:56.484 iops : min= 26, max= 106, avg=57.80, stdev=30.38, samples=5 00:40:56.484 lat (usec) : 250=22.67%, 500=33.33%, 750=0.67% 00:40:56.484 lat (msec) : 50=42.67% 00:40:56.484 cpu : usr=0.19%, sys=0.00%, ctx=151, majf=0, minf=1 00:40:56.484 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:56.484 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 complete : 0=0.7%, 4=99.3%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:56.484 issued rwts: total=150,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:56.484 latency : target=0, window=0, percentile=100.00%, depth=1 00:40:56.484 00:40:56.484 Run status group 0 (all jobs): 00:40:56.484 READ: bw=28.6MiB/s (30.0MB/s), 223KiB/s-15.9MiB/s (228kB/s-16.6MB/s), io=95.2MiB (99.8MB), run=2673-3324msec 00:40:56.484 00:40:56.484 Disk stats (read/write): 00:40:56.484 nvme0n1: ios=9191/0, merge=0/0, ticks=2865/0, in_queue=2865, util=93.53% 00:40:56.484 nvme0n2: ios=3383/0, merge=0/0, ticks=3796/0, in_queue=3796, util=97.06% 00:40:56.484 nvme0n3: ios=11485/0, merge=0/0, ticks=2545/0, in_queue=2545, util=96.20% 00:40:56.484 nvme0n4: ios=146/0, merge=0/0, ticks=2528/0, in_queue=2528, util=96.42% 00:40:56.742 06:46:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.742 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:40:56.999 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.999 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:40:56.999 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:56.999 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:40:57.256 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:40:57.256 06:46:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1277155 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:57.513 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:57.513 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:40:57.771 nvmf hotplug test: fio failed as expected 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f 
./local-job2-2-verify.state 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:57.771 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:57.771 rmmod nvme_tcp 00:40:58.029 rmmod nvme_fabrics 00:40:58.029 rmmod nvme_keyring 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 1274737 ']' 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 1274737 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1274737 ']' 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1274737 00:40:58.029 06:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1274737 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1274737' 00:40:58.029 killing process with pid 1274737 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1274737 00:40:58.029 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1274737 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:40:58.287 
06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:58.287 06:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:00.190 00:41:00.190 real 0m25.830s 00:41:00.190 user 1m30.864s 00:41:00.190 sys 0m11.022s 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:00.190 ************************************ 00:41:00.190 END TEST nvmf_fio_target 00:41:00.190 ************************************ 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:00.190 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:00.190 ************************************ 00:41:00.190 START TEST nvmf_bdevio 00:41:00.190 
************************************ 00:41:00.448 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:00.448 * Looking for test storage... 00:41:00.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:00.448 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:00.448 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:00.448 06:46:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:00.448 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 
00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.449 --rc genhtml_branch_coverage=1 00:41:00.449 --rc genhtml_function_coverage=1 00:41:00.449 --rc genhtml_legend=1 00:41:00.449 --rc geninfo_all_blocks=1 00:41:00.449 --rc geninfo_unexecuted_blocks=1 00:41:00.449 00:41:00.449 ' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.449 --rc genhtml_branch_coverage=1 00:41:00.449 --rc genhtml_function_coverage=1 00:41:00.449 --rc genhtml_legend=1 00:41:00.449 --rc geninfo_all_blocks=1 00:41:00.449 --rc geninfo_unexecuted_blocks=1 00:41:00.449 00:41:00.449 ' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:00.449 --rc genhtml_branch_coverage=1 00:41:00.449 --rc genhtml_function_coverage=1 00:41:00.449 --rc genhtml_legend=1 00:41:00.449 --rc geninfo_all_blocks=1 00:41:00.449 --rc geninfo_unexecuted_blocks=1 00:41:00.449 00:41:00.449 ' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:00.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:41:00.449 --rc genhtml_branch_coverage=1 00:41:00.449 --rc genhtml_function_coverage=1 00:41:00.449 --rc genhtml_legend=1 00:41:00.449 --rc geninfo_all_blocks=1 00:41:00.449 --rc geninfo_unexecuted_blocks=1 00:41:00.449 00:41:00.449 ' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:00.449 06:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.449 06:46:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 
-- # '[' 0 -eq 1 ']' 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:00.449 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:00.450 06:46:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
common/autotest_common.sh@10 -- # set +x 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:07.094 06:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:07.094 06:46:57 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:07.094 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:07.094 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
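The e810/x722/mlx arrays built above (nvmf/common.sh lines 320-344) are lookup tables of PCI device IDs. A minimal sketch of that classification, with the IDs copied from the trace (the real script keys a pci_bus_cache instead; `nic_family` is a hypothetical helper name):

```shell
#!/usr/bin/env bash
# Sketch: map a NIC's PCI device ID to the family buckets common.sh builds.
# IDs are taken verbatim from the trace above; 0x8086 = Intel, 0x15b3 = Mellanox.
nic_family() {
    case $1 in
        0x1592|0x159b) echo e810 ;;     # Intel E810 (ice driver)
        0x37d2)        echo x722 ;;     # Intel X722
        0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
                       echo mlx ;;      # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}
nic_family 0x159b   # the 0000:af:00.0 / 0000:af:00.1 ports found below
```

This is why the trace prints "Found 0000:af:00.0 (0x8086 - 0x159b)" and then takes the e810 branch.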
00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.094 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:07.095 Found net devices under 0000:af:00.0: cvl_0_0 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
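The `pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` step above resolves a PCI function to its kernel net interfaces via sysfs. A sketch of that lookup, parameterized so it can be exercised against a mock tree (on a real host the root would be /sys/bus/pci/devices; the function name is illustrative, not the verbatim common.sh code):

```shell
#!/usr/bin/env bash
# Sketch: list net interfaces for a PCI function, as the loop above does.
pci_net_devs() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local devs=("$root/$pci/net/"*)     # one entry per netdev, e.g. .../net/cvl_0_0
    printf '%s\n' "${devs[@]##*/}"      # strip the sysfs path, keep the names
}
```

The `${pci_net_devs[@]##*/}` expansion in the trace is the same tail-strip, which is how "cvl_0_0" and "cvl_0_1" end up in `net_devs`.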
00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:07.095 Found net devices under 0000:af:00.1: cvl_0_1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:07.095 
06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:07.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:07.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.306 ms 00:41:07.095 00:41:07.095 --- 10.0.0.2 ping statistics --- 00:41:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.095 rtt min/avg/max/mdev = 0.306/0.306/0.306/0.000 ms 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:07.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
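The nvmf_tcp_init sequence above moves the target port into a network namespace, addresses both sides, and opens TCP/4420. A dry-run sketch that emits the same command sequence (the commands mirror the trace; the function wrapper is illustrative, and applying them for real requires root):

```shell
#!/usr/bin/env bash
# Sketch: the netns plumbing nvmf_tcp_init performs, emitted as commands.
nvmf_tcp_init_cmds() {
    local ns=$1 tgt_if=$2 ini_if=$3
    cat <<EOF
ip netns add $ns
ip link set $tgt_if netns $ns
ip addr add 10.0.0.1/24 dev $ini_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt_if
ip link set $ini_if up
ip netns exec $ns ip link set $tgt_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $ini_if -p tcp --dport 4420 -j ACCEPT
EOF
}
nvmf_tcp_init_cmds cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

The two pings that follow in the trace verify this plumbing in both directions before the target is started.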
00:41:07.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:41:07.095 00:41:07.095 --- 10.0.0.1 ping statistics --- 00:41:07.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:07.095 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=1281573 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 1281573 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1281573 ']' 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:07.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:07.095 06:46:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.095 [2024-12-13 06:46:57.918683] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:07.095 [2024-12-13 06:46:57.919585] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
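The launch line above is composed by the `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step earlier in the trace: the namespace wrapper is prepended to the app argv. A sketch of that composition (binary path shortened here; the array names match common.sh):

```shell
#!/usr/bin/env bash
# Sketch: how common.sh assembles the nvmf_tgt command line shown above.
NVMF_TARGET_NS_CMD=(ip netns exec cvl_0_0_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78)
# Prepend the namespace wrapper so the target binds inside cvl_0_0_ns_spdk.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"
```

`-m 0x78` pins the target's reactors to cores 3-6, which matches the "Reactor started on core 3/4/5/6" notices below.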
00:41:07.095 [2024-12-13 06:46:57.919629] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:07.095 [2024-12-13 06:46:57.998060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:07.095 [2024-12-13 06:46:58.020574] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:07.095 [2024-12-13 06:46:58.020612] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:07.095 [2024-12-13 06:46:58.020619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:07.095 [2024-12-13 06:46:58.020625] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:07.095 [2024-12-13 06:46:58.020630] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:07.095 [2024-12-13 06:46:58.024466] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:07.095 [2024-12-13 06:46:58.024568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:07.095 [2024-12-13 06:46:58.024814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:07.095 [2024-12-13 06:46:58.024815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:07.095 [2024-12-13 06:46:58.086084] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:07.095 [2024-12-13 06:46:58.086668] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:07.095 [2024-12-13 06:46:58.087295] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:41:07.095 [2024-12-13 06:46:58.087458] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:41:07.095 [2024-12-13 06:46:58.087583] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:07.095 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.096 [2024-12-13 06:46:58.157519] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.096 Malloc0 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.096 [2024-12-13 06:46:58.245732] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
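The rpc_cmd calls above (bdevio.sh lines 18-22) provision the target end to end. A replayable sketch of the same sequence; RPC defaults to echo so it can be inspected offline, and pointing it at scripts/rpc.py would issue the calls against a live target (the wrapper function is illustrative):

```shell
#!/usr/bin/env bash
# Sketch: the RPC provisioning sequence from the trace, in order.
provision_bdevio_target() {
    local rpc=${RPC:-echo rpc.py}   # set RPC="scripts/rpc.py" for a live run
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
}
provision_bdevio_target
```

The 64/512 arguments are MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE from the top of bdevio.sh, which is why bdevio later reports "Nvme1n1: 131072 blocks of 512 bytes (64 MiB)".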
00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:07.096 { 00:41:07.096 "params": { 00:41:07.096 "name": "Nvme$subsystem", 00:41:07.096 "trtype": "$TEST_TRANSPORT", 00:41:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:07.096 "adrfam": "ipv4", 00:41:07.096 "trsvcid": "$NVMF_PORT", 00:41:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:07.096 "hdgst": ${hdgst:-false}, 00:41:07.096 "ddgst": ${ddgst:-false} 00:41:07.096 }, 00:41:07.096 "method": "bdev_nvme_attach_controller" 00:41:07.096 } 00:41:07.096 EOF 00:41:07.096 )") 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:07.096 06:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:07.096 "params": { 00:41:07.096 "name": "Nvme1", 00:41:07.096 "trtype": "tcp", 00:41:07.096 "traddr": "10.0.0.2", 00:41:07.096 "adrfam": "ipv4", 00:41:07.096 "trsvcid": "4420", 00:41:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:07.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:07.096 "hdgst": false, 00:41:07.096 "ddgst": false 00:41:07.096 }, 00:41:07.096 "method": "bdev_nvme_attach_controller" 00:41:07.096 }' 00:41:07.096 [2024-12-13 06:46:58.298392] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:07.096 [2024-12-13 06:46:58.298438] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1281692 ] 00:41:07.096 [2024-12-13 06:46:58.374216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:07.096 [2024-12-13 06:46:58.399092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:07.096 [2024-12-13 06:46:58.399200] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:07.096 [2024-12-13 06:46:58.399201] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:07.096 I/O targets: 00:41:07.096 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:07.096 00:41:07.096 00:41:07.096 CUnit - A unit testing framework for C - Version 2.1-3 00:41:07.096 http://cunit.sourceforge.net/ 00:41:07.096 00:41:07.096 00:41:07.096 Suite: bdevio tests on: Nvme1n1 00:41:07.096 Test: blockdev write read block ...passed 00:41:07.096 Test: blockdev write zeroes read block ...passed 00:41:07.096 Test: blockdev write zeroes read no split ...passed 00:41:07.096 Test: blockdev 
write zeroes read split ...passed 00:41:07.096 Test: blockdev write zeroes read split partial ...passed 00:41:07.096 Test: blockdev reset ...[2024-12-13 06:46:58.731818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:07.096 [2024-12-13 06:46:58.731876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2174340 (9): Bad file descriptor 00:41:07.354 [2024-12-13 06:46:58.776215] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:41:07.354 passed 00:41:07.354 Test: blockdev write read 8 blocks ...passed 00:41:07.354 Test: blockdev write read size > 128k ...passed 00:41:07.354 Test: blockdev write read invalid size ...passed 00:41:07.354 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:07.354 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:07.354 Test: blockdev write read max offset ...passed 00:41:07.354 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:07.354 Test: blockdev writev readv 8 blocks ...passed 00:41:07.354 Test: blockdev writev readv 30 x 1block ...passed 00:41:07.612 Test: blockdev writev readv block ...passed 00:41:07.612 Test: blockdev writev readv size > 128k ...passed 00:41:07.612 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:07.612 Test: blockdev comparev and writev ...[2024-12-13 06:46:59.031277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.031308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.031322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 
[2024-12-13 06:46:59.031329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.031619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.031629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.031641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.031648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.031937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.031946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.031957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.031964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.032252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.032266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.032277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:07.612 [2024-12-13 06:46:59.032284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:07.612 passed 00:41:07.612 Test: blockdev nvme passthru rw ...passed 00:41:07.612 Test: blockdev nvme passthru vendor specific ...[2024-12-13 06:46:59.113793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:07.612 [2024-12-13 06:46:59.113812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.113926] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:07.612 [2024-12-13 06:46:59.113936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.114048] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:07.612 [2024-12-13 06:46:59.114057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:07.612 [2024-12-13 06:46:59.114173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:07.612 [2024-12-13 06:46:59.114182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:07.612 passed 00:41:07.612 Test: blockdev nvme admin passthru ...passed 00:41:07.612 Test: blockdev copy ...passed 00:41:07.612 00:41:07.612 Run Summary: Type Total Ran Passed Failed Inactive 00:41:07.612 suites 1 1 n/a 0 0 00:41:07.612 tests 23 23 23 0 0 00:41:07.612 asserts 152 152 152 0 n/a 00:41:07.612 00:41:07.612 Elapsed time = 1.183 
seconds 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:07.871 rmmod nvme_tcp 00:41:07.871 rmmod nvme_fabrics 00:41:07.871 rmmod nvme_keyring 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 1281573 ']' 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 1281573 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1281573 ']' 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1281573 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1281573 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1281573' 00:41:07.871 killing process with pid 1281573 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1281573 00:41:07.871 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1281573 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@297 -- # iptr 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:08.130 06:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:10.666 00:41:10.666 real 0m9.852s 00:41:10.666 user 0m8.647s 00:41:10.666 sys 0m5.177s 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:10.666 ************************************ 00:41:10.666 END TEST nvmf_bdevio 00:41:10.666 ************************************ 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:10.666 00:41:10.666 real 4m29.015s 00:41:10.666 user 9m0.036s 00:41:10.666 sys 1m48.778s 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 
-- # xtrace_disable 00:41:10.666 06:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:10.666 ************************************ 00:41:10.666 END TEST nvmf_target_core_interrupt_mode 00:41:10.666 ************************************ 00:41:10.666 06:47:01 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:10.666 06:47:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:10.666 06:47:01 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:10.666 06:47:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:10.666 ************************************ 00:41:10.666 START TEST nvmf_interrupt 00:41:10.666 ************************************ 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:10.666 * Looking for test storage... 
00:41:10.666 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:10.666 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.667 --rc genhtml_branch_coverage=1 00:41:10.667 --rc genhtml_function_coverage=1 00:41:10.667 --rc genhtml_legend=1 00:41:10.667 --rc geninfo_all_blocks=1 00:41:10.667 --rc geninfo_unexecuted_blocks=1 00:41:10.667 00:41:10.667 ' 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.667 --rc genhtml_branch_coverage=1 00:41:10.667 --rc 
genhtml_function_coverage=1 00:41:10.667 --rc genhtml_legend=1 00:41:10.667 --rc geninfo_all_blocks=1 00:41:10.667 --rc geninfo_unexecuted_blocks=1 00:41:10.667 00:41:10.667 ' 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.667 --rc genhtml_branch_coverage=1 00:41:10.667 --rc genhtml_function_coverage=1 00:41:10.667 --rc genhtml_legend=1 00:41:10.667 --rc geninfo_all_blocks=1 00:41:10.667 --rc geninfo_unexecuted_blocks=1 00:41:10.667 00:41:10.667 ' 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:10.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:10.667 --rc genhtml_branch_coverage=1 00:41:10.667 --rc genhtml_function_coverage=1 00:41:10.667 --rc genhtml_legend=1 00:41:10.667 --rc geninfo_all_blocks=1 00:41:10.667 --rc geninfo_unexecuted_blocks=1 00:41:10.667 00:41:10.667 ' 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:10.667 
06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:10.667 06:47:01 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.667 
06:47:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:10.667 06:47:02 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:10.667 
06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:10.667 06:47:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:15.941 06:47:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:15.941 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:15.941 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:15.941 06:47:07 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:15.941 Found net devices under 0000:af:00.0: cvl_0_0 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:15.941 Found net devices under 0000:af:00.1: cvl_0_1 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 ))
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP=
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP=
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:41:15.941 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2
00:41:16.200 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:41:16.200 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms
00:41:16.200
00:41:16.200 --- 10.0.0.2 ping statistics ---
00:41:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:16.200 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:41:16.200 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:41:16.200 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms
00:41:16.200
00:41:16.200 --- 10.0.0.1 ping statistics ---
00:41:16.200 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:41:16.200 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']'
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]]
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]]
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:41:16.200 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=1285201
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 1285201
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1285201 ']'
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:41:16.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable
00:41:16.459 06:47:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.459 [2024-12-13 06:47:07.923593] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:41:16.459 [2024-12-13 06:47:07.924489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization...
00:41:16.459 [2024-12-13 06:47:07.924524] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:41:16.459 [2024-12-13 06:47:08.004609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:41:16.459 [2024-12-13 06:47:08.026158] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:41:16.459 [2024-12-13 06:47:08.026194] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:41:16.459 [2024-12-13 06:47:08.026201] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:41:16.459 [2024-12-13 06:47:08.026206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:41:16.459 [2024-12-13 06:47:08.026211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:41:16.459 [2024-12-13 06:47:08.027325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:41:16.459 [2024-12-13 06:47:08.027327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:41:16.459 [2024-12-13 06:47:08.089784] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:41:16.460 [2024-12-13 06:47:08.090348] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:41:16.460 [2024-12-13 06:47:08.090553] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000
00:41:16.719 5000+0 records in
00:41:16.719 5000+0 records out
00:41:16.719 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0175176 s, 585 MB/s
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 AIO0
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 [2024-12-13 06:47:08.236143] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:41:16.719 [2024-12-13 06:47:08.276488] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285201 0
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 0 idle
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:16.719 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285201 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0'
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285201 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.22 reactor_0
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1}
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1285201 1
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 1 idle
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:16.978 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285244 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285244 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1285429
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285201 0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285201 0 busy
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285201 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.41 reactor_0'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285201 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.41 reactor_0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1}
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1285201 1
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1285201 1 busy
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]]
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:17.236 06:47:08 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285244 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.28 reactor_1'
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285244 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:00.28 reactor_1
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]]
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold ))
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]]
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:17.493 06:47:09 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1285429
00:41:27.452 Initializing NVMe Controllers
00:41:27.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:41:27.452 Controller IO queue size 256, less than required.
00:41:27.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:41:27.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:41:27.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:41:27.452 Initialization complete. Launching workers.
00:41:27.452 ========================================================
00:41:27.452 Latency(us)
00:41:27.452 Device Information : IOPS MiB/s Average min max
00:41:27.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16952.80 66.22 15106.58 5510.70 23289.75
00:41:27.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16798.00 65.62 15244.07 7461.09 28176.66
00:41:27.452 ========================================================
00:41:27.452 Total : 33750.80 131.84 15175.01 5510.70 28176.66
00:41:27.452
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285201 0
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 0 idle
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285201 root 20 0 128.2g 46848 33792 S 6.2 0.1 0:20.23 reactor_0'
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285201 root 20 0 128.2g 46848 33792 S 6.2 0.1 0:20.23 reactor_0
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1}
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1285201 1
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 1 idle
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:27.452 06:47:18 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285244 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:09.99 reactor_1'
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285244 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:09.99 reactor_1
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:27.712 06:47:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:41:27.971 06:47:19 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME
00:41:27.971 06:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0
00:41:27.971 06:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:41:27.971 06:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:41:27.971 06:47:19 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285201 0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 0 idle
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285201 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285201 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.48 reactor_0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1}
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1285201 1
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1285201 1 idle
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1285201
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1285201 -w 256
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1285244 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1285244 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.10 reactor_1
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}'
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]]
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold ))
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0
00:41:30.504 06:47:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:41:30.504 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20}
00:41:30.504 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:41:30.504 rmmod nvme_tcp
00:41:30.504 rmmod nvme_fabrics
00:41:30.762 rmmod nvme_keyring
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 1285201 ']'
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 1285201
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1285201 ']'
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1285201
00:41:30.762 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1285201
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1285201'
00:41:30.763 killing process with pid 1285201
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1285201
00:41:30.763 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1285201
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']'
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]]
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF
00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt --
nvmf/common.sh@791 -- # iptables-restore 00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:31.022 06:47:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:32.926 06:47:24 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:32.926 00:41:32.926 real 0m22.740s 00:41:32.926 user 0m39.839s 00:41:32.926 sys 0m8.166s 00:41:32.926 06:47:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:32.926 06:47:24 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:32.926 ************************************ 00:41:32.926 END TEST nvmf_interrupt 00:41:32.926 ************************************ 00:41:33.185 00:41:33.185 real 35m18.453s 00:41:33.185 user 85m49.438s 00:41:33.185 sys 10m19.330s 00:41:33.185 06:47:24 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:33.185 06:47:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.185 ************************************ 00:41:33.185 END TEST nvmf_tcp 00:41:33.185 ************************************ 00:41:33.185 06:47:24 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:33.185 06:47:24 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:33.185 06:47:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:33.185 06:47:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:33.185 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:41:33.185 ************************************ 
00:41:33.185 START TEST spdkcli_nvmf_tcp 00:41:33.185 ************************************ 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:33.185 * Looking for test storage... 00:41:33.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < 
(ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:33.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.185 --rc genhtml_branch_coverage=1 00:41:33.185 --rc genhtml_function_coverage=1 00:41:33.185 --rc genhtml_legend=1 00:41:33.185 --rc geninfo_all_blocks=1 00:41:33.185 --rc geninfo_unexecuted_blocks=1 00:41:33.185 00:41:33.185 ' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:33.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.185 --rc genhtml_branch_coverage=1 00:41:33.185 --rc genhtml_function_coverage=1 00:41:33.185 --rc genhtml_legend=1 00:41:33.185 --rc geninfo_all_blocks=1 
00:41:33.185 --rc geninfo_unexecuted_blocks=1 00:41:33.185 00:41:33.185 ' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:33.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.185 --rc genhtml_branch_coverage=1 00:41:33.185 --rc genhtml_function_coverage=1 00:41:33.185 --rc genhtml_legend=1 00:41:33.185 --rc geninfo_all_blocks=1 00:41:33.185 --rc geninfo_unexecuted_blocks=1 00:41:33.185 00:41:33.185 ' 00:41:33.185 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:33.185 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:33.185 --rc genhtml_branch_coverage=1 00:41:33.185 --rc genhtml_function_coverage=1 00:41:33.185 --rc genhtml_legend=1 00:41:33.185 --rc geninfo_all_blocks=1 00:41:33.185 --rc geninfo_unexecuted_blocks=1 00:41:33.186 00:41:33.186 ' 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:33.186 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export 
NVMF_APP_SHM_ID 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:33.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1288058 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1288058 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1288058 ']' 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:33.444 
06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:33.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:33.444 06:47:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.444 [2024-12-13 06:47:24.915860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:41:33.444 [2024-12-13 06:47:24.915908] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1288058 ] 00:41:33.444 [2024-12-13 06:47:24.991959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:33.444 [2024-12-13 06:47:25.016120] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:33.444 [2024-12-13 06:47:25.016123] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 
00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:33.701 06:47:25 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:33.701 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:33.701 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:33.701 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:33.701 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:33.701 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:33.701 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:33.701 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:33.701 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:33.701 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 
00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:33.701 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:33.701 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:33.701 ' 00:41:36.225 [2024-12-13 06:47:27.830734] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:37.596 [2024-12-13 06:47:29.167153] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:41:40.119 [2024-12-13 06:47:31.650873] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening 
on 127.0.0.1 port 4261 *** 00:41:42.643 [2024-12-13 06:47:33.801501] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:41:44.015 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:41:44.015 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:41:44.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:44.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:44.015 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 
'nqn.2014-08.org.spdk:cnode2', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:41:44.015 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:41:44.015 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:41:44.015 06:47:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:41:44.579 06:47:35 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:41:44.579 06:47:36 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:44.580 06:47:36 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:41:44.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:41:44.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts 
delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:44.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:41:44.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:41:44.580 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:41:44.580 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:41:44.580 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:41:44.580 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:41:44.580 ' 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:41:51.132 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:41:51.132 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:41:51.132 
Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:41:51.132 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1288058 ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1288058' 00:41:51.132 killing process with pid 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1288058 00:41:51.132 06:47:41 
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1288058 ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1288058 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1288058 ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1288058 00:41:51.132 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1288058) - No such process 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1288058 is not found' 00:41:51.132 Process with pid 1288058 is not found 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:41:51.132 00:41:51.132 real 0m17.280s 00:41:51.132 user 0m38.101s 00:41:51.132 sys 0m0.766s 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:51.132 06:47:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:51.132 ************************************ 00:41:51.132 END TEST spdkcli_nvmf_tcp 00:41:51.132 ************************************ 00:41:51.132 06:47:41 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:51.132 06:47:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:51.132 06:47:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:41:51.132 06:47:41 -- common/autotest_common.sh@10 -- # set +x 00:41:51.132 ************************************ 00:41:51.132 START TEST nvmf_identify_passthru 00:41:51.132 ************************************ 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:41:51.132 * Looking for test storage... 00:41:51.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@344 -- # 
case "$op" in 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:51.132 06:47:42 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.132 --rc genhtml_branch_coverage=1 00:41:51.132 --rc genhtml_function_coverage=1 00:41:51.132 --rc genhtml_legend=1 00:41:51.132 --rc geninfo_all_blocks=1 00:41:51.132 --rc geninfo_unexecuted_blocks=1 00:41:51.132 
00:41:51.132 ' 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.132 --rc genhtml_branch_coverage=1 00:41:51.132 --rc genhtml_function_coverage=1 00:41:51.132 --rc genhtml_legend=1 00:41:51.132 --rc geninfo_all_blocks=1 00:41:51.132 --rc geninfo_unexecuted_blocks=1 00:41:51.132 00:41:51.132 ' 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.132 --rc genhtml_branch_coverage=1 00:41:51.132 --rc genhtml_function_coverage=1 00:41:51.132 --rc genhtml_legend=1 00:41:51.132 --rc geninfo_all_blocks=1 00:41:51.132 --rc geninfo_unexecuted_blocks=1 00:41:51.132 00:41:51.132 ' 00:41:51.132 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:51.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:51.132 --rc genhtml_branch_coverage=1 00:41:51.132 --rc genhtml_function_coverage=1 00:41:51.132 --rc genhtml_legend=1 00:41:51.132 --rc geninfo_all_blocks=1 00:41:51.132 --rc geninfo_unexecuted_blocks=1 00:41:51.132 00:41:51.132 ' 00:41:51.132 06:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:51.132 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:41:51.132 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:51.132 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:51.132 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:51.132 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:51.133 06:47:42 nvmf_identify_passthru -- 
nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:41:51.133 06:47:42 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:51.133 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:51.133 06:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:51.133 06:47:42 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:41:51.133 06:47:42 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:51.133 06:47:42 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.133 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:41:51.133 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:51.133 06:47:42 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:41:51.133 06:47:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:56.410 
06:47:47 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:56.410 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:56.410 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:56.410 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:56.411 Found net devices under 0000:af:00.0: cvl_0_0 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:56.411 06:47:47 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:56.411 Found net devices under 0000:af:00.1: cvl_0_1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:56.411 
06:47:47 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:56.411 06:47:47 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:56.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:41:56.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.363 ms 00:41:56.411 00:41:56.411 --- 10.0.0.2 ping statistics --- 00:41:56.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.411 rtt min/avg/max/mdev = 0.363/0.363/0.363/0.000 ms 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:56.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:56.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:41:56.411 00:41:56.411 --- 10.0.0.1 ping statistics --- 00:41:56.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:56.411 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:56.411 06:47:48 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:56.411 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:41:56.411 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:56.411 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:41:56.411 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:41:56.411 
06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:41:56.411 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:41:56.411 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:41:56.670 06:47:48 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:41:56.670 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:41:56.670 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:41:56.670 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:41:56.670 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:41:56.670 06:47:48 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:00.854 06:47:52 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:00.854 06:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:00.854 06:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:00.854 06:47:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1295145 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:05.039 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1295145 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1295145 ']' 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:05.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:05.039 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.039 [2024-12-13 06:47:56.586845] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:05.039 [2024-12-13 06:47:56.586893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:05.039 [2024-12-13 06:47:56.666717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:05.039 [2024-12-13 06:47:56.690202] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:05.039 [2024-12-13 06:47:56.690240] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:05.039 [2024-12-13 06:47:56.690248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:05.039 [2024-12-13 06:47:56.690255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:05.039 [2024-12-13 06:47:56.690260] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:05.039 [2024-12-13 06:47:56.691612] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:05.039 [2024-12-13 06:47:56.691731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:05.039 [2024-12-13 06:47:56.691767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:05.039 [2024-12-13 06:47:56.691768] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:05.297 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.297 INFO: Log level set to 20 00:42:05.297 INFO: Requests: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "method": "nvmf_set_config", 00:42:05.297 "id": 1, 00:42:05.297 "params": { 00:42:05.297 "admin_cmd_passthru": { 00:42:05.297 "identify_ctrlr": true 00:42:05.297 } 00:42:05.297 } 00:42:05.297 } 00:42:05.297 00:42:05.297 INFO: response: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "id": 1, 00:42:05.297 "result": true 00:42:05.297 } 00:42:05.297 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.297 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.297 INFO: Setting log level to 20 00:42:05.297 INFO: Setting log level to 20 00:42:05.297 INFO: Log level set to 20 00:42:05.297 INFO: Log level set to 20 00:42:05.297 
INFO: Requests: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "method": "framework_start_init", 00:42:05.297 "id": 1 00:42:05.297 } 00:42:05.297 00:42:05.297 INFO: Requests: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "method": "framework_start_init", 00:42:05.297 "id": 1 00:42:05.297 } 00:42:05.297 00:42:05.297 [2024-12-13 06:47:56.810048] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:05.297 INFO: response: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "id": 1, 00:42:05.297 "result": true 00:42:05.297 } 00:42:05.297 00:42:05.297 INFO: response: 00:42:05.297 { 00:42:05.297 "jsonrpc": "2.0", 00:42:05.297 "id": 1, 00:42:05.297 "result": true 00:42:05.297 } 00:42:05.297 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.297 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.297 INFO: Setting log level to 40 00:42:05.297 INFO: Setting log level to 40 00:42:05.297 INFO: Setting log level to 40 00:42:05.297 [2024-12-13 06:47:56.823312] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:05.297 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:05.297 06:47:56 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:05.297 06:47:56 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:05.297 06:47:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.572 Nvme0n1 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.572 [2024-12-13 06:47:59.736803] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.572 06:47:59 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.572 [ 00:42:08.572 { 00:42:08.572 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:08.572 "subtype": "Discovery", 00:42:08.572 "listen_addresses": [], 00:42:08.572 "allow_any_host": true, 00:42:08.572 "hosts": [] 00:42:08.572 }, 00:42:08.572 { 00:42:08.572 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:08.572 "subtype": "NVMe", 00:42:08.572 "listen_addresses": [ 00:42:08.572 { 00:42:08.572 "trtype": "TCP", 00:42:08.572 "adrfam": "IPv4", 00:42:08.572 "traddr": "10.0.0.2", 00:42:08.572 "trsvcid": "4420" 00:42:08.572 } 00:42:08.572 ], 00:42:08.572 "allow_any_host": true, 00:42:08.572 "hosts": [], 00:42:08.572 "serial_number": "SPDK00000000000001", 00:42:08.572 "model_number": "SPDK bdev Controller", 00:42:08.572 "max_namespaces": 1, 00:42:08.572 "min_cntlid": 1, 00:42:08.572 "max_cntlid": 65519, 00:42:08.572 "namespaces": [ 00:42:08.572 { 00:42:08.572 "nsid": 1, 00:42:08.572 "bdev_name": "Nvme0n1", 00:42:08.572 "name": "Nvme0n1", 00:42:08.572 "nguid": "C641F22FEC5840B7BB636DD7E7867B78", 00:42:08.572 "uuid": "c641f22f-ec58-40b7-bb63-6dd7e7867b78" 00:42:08.572 } 00:42:08.572 ] 00:42:08.572 } 00:42:08.572 ] 00:42:08.572 06:47:59 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:08.572 06:47:59 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:08.572 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:08.572 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:08.572 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:08.572 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:08.830 06:48:00 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:08.830 rmmod nvme_tcp 00:42:08.830 rmmod nvme_fabrics 00:42:08.830 rmmod nvme_keyring 00:42:08.830 06:48:00 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 1295145 ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 1295145 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1295145 ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1295145 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1295145 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1295145' 00:42:08.830 killing process with pid 1295145 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1295145 00:42:08.830 06:48:00 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1295145 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:10.728 06:48:01 nvmf_identify_passthru -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:10.728 06:48:01 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:10.728 06:48:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:10.728 06:48:01 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.759 06:48:03 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:12.759 00:42:12.759 real 0m21.974s 00:42:12.759 user 0m28.371s 00:42:12.759 sys 0m5.308s 00:42:12.759 06:48:03 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:12.759 06:48:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:12.759 ************************************ 00:42:12.759 END TEST nvmf_identify_passthru 00:42:12.759 ************************************ 00:42:12.759 06:48:04 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:12.759 06:48:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:12.759 06:48:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:12.759 06:48:04 -- common/autotest_common.sh@10 -- # set +x 00:42:12.759 ************************************ 00:42:12.759 START TEST nvmf_dif 00:42:12.759 ************************************ 00:42:12.759 06:48:04 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:12.759 * Looking for test storage... 
00:42:12.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:12.759 06:48:04 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:12.759 06:48:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:12.759 06:48:04 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:12.759 06:48:04 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:12.759 06:48:04 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:12.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.760 --rc genhtml_branch_coverage=1 00:42:12.760 --rc genhtml_function_coverage=1 00:42:12.760 --rc genhtml_legend=1 00:42:12.760 --rc geninfo_all_blocks=1 00:42:12.760 --rc geninfo_unexecuted_blocks=1 00:42:12.760 00:42:12.760 ' 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:12.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.760 --rc genhtml_branch_coverage=1 00:42:12.760 --rc genhtml_function_coverage=1 00:42:12.760 --rc genhtml_legend=1 00:42:12.760 --rc geninfo_all_blocks=1 00:42:12.760 --rc geninfo_unexecuted_blocks=1 00:42:12.760 00:42:12.760 ' 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:42:12.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.760 --rc genhtml_branch_coverage=1 00:42:12.760 --rc genhtml_function_coverage=1 00:42:12.760 --rc genhtml_legend=1 00:42:12.760 --rc geninfo_all_blocks=1 00:42:12.760 --rc geninfo_unexecuted_blocks=1 00:42:12.760 00:42:12.760 ' 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:12.760 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:12.760 --rc genhtml_branch_coverage=1 00:42:12.760 --rc genhtml_function_coverage=1 00:42:12.760 --rc genhtml_legend=1 00:42:12.760 --rc geninfo_all_blocks=1 00:42:12.760 --rc geninfo_unexecuted_blocks=1 00:42:12.760 00:42:12.760 ' 00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:12.760 06:48:04 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:12.760 06:48:04 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:12.760 06:48:04 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.760 06:48:04 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.760 06:48:04 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.760 06:48:04 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:12.760 06:48:04 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:12.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:12.760 06:48:04 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:12.760 06:48:04 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:42:12.760 06:48:04 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:19.330 06:48:09 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:19.330 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:19.330 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.330 06:48:09 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:19.330 Found net devices under 0000:af:00.0: cvl_0_0 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:19.330 Found net devices under 0000:af:00.1: cvl_0_1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:19.330 
06:48:09 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:19.330 06:48:09 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:19.330 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:19.330 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:42:19.330 00:42:19.330 --- 10.0.0.2 ping statistics --- 00:42:19.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.330 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:19.330 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:19.330 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:42:19.330 00:42:19.330 --- 10.0.0.1 ping statistics --- 00:42:19.330 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:19.330 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:19.330 06:48:10 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:21.235 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:21.235 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:21.235 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:21.235 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:21.235 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:21.235 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:21.236 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:21.236 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:21.236 06:48:12 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:21.236 06:48:12 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=1301036 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:21.236 06:48:12 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 1301036 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1301036 ']' 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:21.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:21.236 06:48:12 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.494 [2024-12-13 06:48:12.915434] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:42:21.494 [2024-12-13 06:48:12.915494] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:21.494 [2024-12-13 06:48:12.996347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:21.494 [2024-12-13 06:48:13.017790] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:21.494 [2024-12-13 06:48:13.017827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:21.494 [2024-12-13 06:48:13.017833] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:21.494 [2024-12-13 06:48:13.017840] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:21.494 [2024-12-13 06:48:13.017845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
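The `nvmf_tcp_init` trace above (common.sh@250-291) boils down to moving one physical port into a private network namespace, addressing both ends on 10.0.0.0/24, and opening TCP 4420 — which is why the target app is then launched under `ip netns exec`. A dry-run sketch of that sequence, hedged: the `cvl_0_0`/`cvl_0_1` names come from this run's ice NICs and the addressing from `nvmf/common.sh`; the commands are echoed rather than executed because the real thing needs root.

```shell
# Dry-run sketch of the topology nvmf_tcp_init builds in the log above.
# Interface names and IPs are this run's values, not universal defaults.
setup_cmds() {
    ns=cvl_0_0_ns_spdk
    echo "ip netns add $ns"
    echo "ip link set cvl_0_0 netns $ns"                    # target NIC moves into the namespace
    echo "ip addr add 10.0.0.1/24 dev cvl_0_1"              # initiator side stays in the root namespace
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_0"
    echo "ip link set cvl_0_1 up"
    echo "ip netns exec $ns ip link set cvl_0_0 up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
}
setup_cmds
```

The cross-namespace pings that follow in the log are the sanity check that this topology actually carries traffic before `nvmf_tgt` is started inside the namespace.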
00:42:21.494 [2024-12-13 06:48:13.018344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:21.494 06:48:13 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.494 06:48:13 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:21.494 06:48:13 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:21.494 06:48:13 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.494 06:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.753 [2024-12-13 06:48:13.149886] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:21.753 06:48:13 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.753 06:48:13 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:21.753 06:48:13 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:21.753 06:48:13 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:21.753 06:48:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:21.753 ************************************ 00:42:21.753 START TEST fio_dif_1_default 00:42:21.753 ************************************ 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:21.753 bdev_null0 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.753 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:21.754 [2024-12-13 06:48:13.222235] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:21.754 { 00:42:21.754 "params": { 00:42:21.754 "name": "Nvme$subsystem", 00:42:21.754 "trtype": "$TEST_TRANSPORT", 00:42:21.754 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:21.754 "adrfam": "ipv4", 00:42:21.754 "trsvcid": "$NVMF_PORT", 00:42:21.754 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:21.754 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:21.754 "hdgst": ${hdgst:-false}, 00:42:21.754 "ddgst": ${ddgst:-false} 00:42:21.754 }, 00:42:21.754 "method": "bdev_nvme_attach_controller" 00:42:21.754 } 00:42:21.754 EOF 00:42:21.754 )") 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:21.754 "params": { 00:42:21.754 "name": "Nvme0", 00:42:21.754 "trtype": "tcp", 00:42:21.754 "traddr": "10.0.0.2", 00:42:21.754 "adrfam": "ipv4", 00:42:21.754 "trsvcid": "4420", 00:42:21.754 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:21.754 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:21.754 "hdgst": false, 00:42:21.754 "ddgst": false 00:42:21.754 }, 00:42:21.754 "method": "bdev_nvme_attach_controller" 00:42:21.754 }' 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:21.754 06:48:13 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:22.012 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:22.012 fio-3.35 
00:42:22.012 Starting 1 thread 00:42:34.219 00:42:34.219 filename0: (groupid=0, jobs=1): err= 0: pid=1301400: Fri Dec 13 06:48:24 2024 00:42:34.219 read: IOPS=195, BW=782KiB/s (801kB/s)(7824KiB/10002msec) 00:42:34.219 slat (nsec): min=5899, max=26514, avg=6192.41, stdev=1061.52 00:42:34.219 clat (usec): min=386, max=45129, avg=20435.29, stdev=20539.17 00:42:34.219 lat (usec): min=392, max=45155, avg=20441.48, stdev=20539.13 00:42:34.219 clat percentiles (usec): 00:42:34.219 | 1.00th=[ 404], 5.00th=[ 445], 10.00th=[ 453], 20.00th=[ 474], 00:42:34.219 | 30.00th=[ 586], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[41157], 00:42:34.219 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:42:34.219 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45351], 99.95th=[45351], 00:42:34.219 | 99.99th=[45351] 00:42:34.219 bw ( KiB/s): min= 704, max= 896, per=100.00%, avg=784.84, stdev=55.80, samples=19 00:42:34.219 iops : min= 176, max= 224, avg=196.21, stdev=13.95, samples=19 00:42:34.219 lat (usec) : 500=26.64%, 750=24.90% 00:42:34.219 lat (msec) : 50=48.47% 00:42:34.219 cpu : usr=92.32%, sys=7.42%, ctx=16, majf=0, minf=0 00:42:34.219 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:34.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:34.219 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:34.219 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:34.219 00:42:34.219 Run status group 0 (all jobs): 00:42:34.219 READ: bw=782KiB/s (801kB/s), 782KiB/s-782KiB/s (801kB/s-801kB/s), io=7824KiB (8012kB), run=10002-10002msec 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
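The JSON blob printed before the fio run above is built by `gen_nvmf_target_json` in `nvmf/common.sh`: one `bdev_nvme_attach_controller` params object per subsystem id, joined with `IFS=,` and fed to fio over `/dev/fd/62`. A minimal POSIX-sh re-creation of just the part visible in this log (hedged: the real helper templates `$TEST_TRANSPORT`/`$NVMF_FIRST_TARGET_IP`; the literal `tcp`/`10.0.0.2`/`4420` values here are this run's):

```shell
# Re-create the per-subsystem attach-controller params seen in the log.
# Pass one or more subsystem ids; entries are comma-joined like IFS=, does.
gen_conf() {
    first=1
    for subsystem in "$@"; do
        [ "$first" -eq 1 ] || printf ','
        first=0
        printf '{ "params": { "name": "Nvme%s", "trtype": "tcp", "traddr": "10.0.0.2", "adrfam": "ipv4", "trsvcid": "4420", "subnqn": "nqn.2016-06.io.spdk:cnode%s", "hostnqn": "nqn.2016-06.io.spdk:host%s", "hdgst": false, "ddgst": false }, "method": "bdev_nvme_attach_controller" }' \
            "$subsystem" "$subsystem" "$subsystem"
    done
    printf '\n'
}
gen_conf 0
```

Called with two ids (`gen_conf 0 1`), it yields the two-controller config that shows up later in the multi-subsystem test.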
00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.219 00:42:34.219 real 0m11.167s 00:42:34.219 user 0m16.007s 00:42:34.219 sys 0m1.035s 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:34.219 06:48:24 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:34.219 ************************************ 00:42:34.219 END TEST fio_dif_1_default 00:42:34.219 ************************************ 00:42:34.219 06:48:24 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:34.219 06:48:24 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:34.219 06:48:24 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:34.219 06:48:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:34.219 ************************************ 00:42:34.220 START TEST fio_dif_1_multi_subsystems 00:42:34.220 ************************************ 00:42:34.220 06:48:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 bdev_null0 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 [2024-12-13 06:48:24.462686] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 bdev_null1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:34.220 { 00:42:34.220 "params": { 00:42:34.220 "name": "Nvme$subsystem", 00:42:34.220 "trtype": "$TEST_TRANSPORT", 00:42:34.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:34.220 "adrfam": "ipv4", 00:42:34.220 "trsvcid": "$NVMF_PORT", 00:42:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:34.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:34.220 "hdgst": ${hdgst:-false}, 00:42:34.220 "ddgst": ${ddgst:-false} 00:42:34.220 }, 00:42:34.220 "method": "bdev_nvme_attach_controller" 00:42:34.220 } 00:42:34.220 EOF 00:42:34.220 )") 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:34.220 { 00:42:34.220 "params": { 00:42:34.220 "name": "Nvme$subsystem", 00:42:34.220 "trtype": "$TEST_TRANSPORT", 00:42:34.220 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:34.220 "adrfam": "ipv4", 00:42:34.220 "trsvcid": "$NVMF_PORT", 00:42:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:34.220 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:34.220 "hdgst": ${hdgst:-false}, 00:42:34.220 "ddgst": ${ddgst:-false} 00:42:34.220 }, 00:42:34.220 "method": "bdev_nvme_attach_controller" 00:42:34.220 } 00:42:34.220 EOF 00:42:34.220 )") 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
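Each `create_subsystem N` traced above is the same four-RPC pattern: create a DIF-capable null bdev, create the subsystem, attach the bdev as a namespace, add a TCP listener. A hedged sketch of the equivalent direct `rpc.py` calls (the harness actually issues these through `rpc_cmd` against the target running inside the `cvl_0_0_ns_spdk` namespace; the `rpc.py` path is assumed, so the commands are printed rather than run):

```shell
# Print the four RPCs create_subsystem issues for a given subsystem id,
# as seen in the dif.sh@21-24 trace lines above.
rpc_seq() {
    sub=$1
    cat <<EOF
rpc.py bdev_null_create bdev_null$sub 64 512 --md-size 16 --dif-type 1
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$sub --serial-number 53313233-$sub --allow-any-host
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$sub bdev_null$sub
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$sub -t tcp -a 10.0.0.2 -s 4420
EOF
}
rpc_seq 0
rpc_seq 1
```

The multi-subsystem test simply runs this pattern twice (ids 0 and 1), then hands fio a two-controller JSON config so both namespaces are exercised concurrently.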
00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:34.220 "params": { 00:42:34.220 "name": "Nvme0", 00:42:34.220 "trtype": "tcp", 00:42:34.220 "traddr": "10.0.0.2", 00:42:34.220 "adrfam": "ipv4", 00:42:34.220 "trsvcid": "4420", 00:42:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:34.220 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:34.220 "hdgst": false, 00:42:34.220 "ddgst": false 00:42:34.220 }, 00:42:34.220 "method": "bdev_nvme_attach_controller" 00:42:34.220 },{ 00:42:34.220 "params": { 00:42:34.220 "name": "Nvme1", 00:42:34.220 "trtype": "tcp", 00:42:34.220 "traddr": "10.0.0.2", 00:42:34.220 "adrfam": "ipv4", 00:42:34.220 "trsvcid": "4420", 00:42:34.220 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:34.220 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:34.220 "hdgst": false, 00:42:34.220 "ddgst": false 00:42:34.220 }, 00:42:34.220 "method": "bdev_nvme_attach_controller" 00:42:34.220 }' 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:34.220 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:34.221 06:48:24 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:34.221 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:34.221 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:34.221 fio-3.35 00:42:34.221 Starting 2 threads 00:42:44.197 00:42:44.197 filename0: (groupid=0, jobs=1): err= 0: pid=1303321: Fri Dec 13 06:48:35 2024 00:42:44.197 read: IOPS=193, BW=774KiB/s (792kB/s)(7760KiB/10029msec) 00:42:44.197 slat (nsec): min=5947, max=32575, avg=7446.04, stdev=2348.92 00:42:44.197 clat (usec): min=396, max=42535, avg=20655.76, stdev=20332.59 00:42:44.197 lat (usec): min=402, max=42542, avg=20663.21, stdev=20332.08 00:42:44.197 clat percentiles (usec): 00:42:44.197 | 1.00th=[ 408], 5.00th=[ 416], 10.00th=[ 424], 20.00th=[ 449], 00:42:44.197 | 30.00th=[ 603], 40.00th=[ 611], 50.00th=[ 979], 60.00th=[40633], 00:42:44.197 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:44.197 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:44.197 | 99.99th=[42730] 00:42:44.197 bw ( KiB/s): min= 704, max= 832, per=50.17%, avg=774.40, stdev=26.67, samples=20 00:42:44.197 iops : min= 176, max= 208, avg=193.60, stdev= 6.67, samples=20 00:42:44.197 lat (usec) : 500=25.26%, 750=22.58%, 1000=2.42% 00:42:44.197 lat (msec) : 2=0.26%, 50=49.48% 00:42:44.197 cpu : usr=96.38%, sys=3.37%, ctx=10, majf=0, minf=148 00:42:44.197 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:42:44.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.197 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.197 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:44.197 filename1: (groupid=0, jobs=1): err= 0: pid=1303322: Fri Dec 13 06:48:35 2024 00:42:44.197 read: IOPS=192, BW=770KiB/s (788kB/s)(7728KiB/10039msec) 00:42:44.197 slat (nsec): min=5948, max=28318, avg=7386.20, stdev=2168.79 00:42:44.197 clat (usec): min=386, max=42596, avg=20762.30, stdev=20334.56 00:42:44.197 lat (usec): min=392, max=42603, avg=20769.69, stdev=20334.11 00:42:44.197 clat percentiles (usec): 00:42:44.197 | 1.00th=[ 404], 5.00th=[ 412], 10.00th=[ 420], 20.00th=[ 506], 00:42:44.198 | 30.00th=[ 611], 40.00th=[ 930], 50.00th=[ 1004], 60.00th=[41157], 00:42:44.198 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[41681], 00:42:44.198 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:42:44.198 | 99.99th=[42730] 00:42:44.198 bw ( KiB/s): min= 704, max= 832, per=49.97%, avg=771.20, stdev=32.67, samples=20 00:42:44.198 iops : min= 176, max= 208, avg=192.80, stdev= 8.17, samples=20 00:42:44.198 lat (usec) : 500=19.93%, 750=13.41%, 1000=16.41% 00:42:44.198 lat (msec) : 2=0.78%, 50=49.48% 00:42:44.198 cpu : usr=96.57%, sys=3.18%, ctx=10, majf=0, minf=47 00:42:44.198 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:44.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:44.198 issued rwts: total=1932,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:44.198 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:44.198 00:42:44.198 Run status group 0 (all jobs): 00:42:44.198 READ: bw=1543KiB/s (1580kB/s), 770KiB/s-774KiB/s (788kB/s-792kB/s), io=15.1MiB (15.9MB), run=10029-10039msec 00:42:44.198 06:48:35 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.198 00:42:44.198 real 0m11.380s 00:42:44.198 user 0m26.871s 00:42:44.198 sys 0m0.973s 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:44.198 ************************************ 00:42:44.198 END TEST fio_dif_1_multi_subsystems 00:42:44.198 ************************************ 00:42:44.198 06:48:35 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:42:44.198 06:48:35 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:44.198 06:48:35 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:44.198 06:48:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:44.457 ************************************ 00:42:44.457 START TEST fio_dif_rand_params 00:42:44.457 ************************************ 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:42:44.457 06:48:35 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.457 bdev_null0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.457 
06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:44.457 [2024-12-13 06:48:35.922950] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.457 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:44.458 { 00:42:44.458 "params": { 00:42:44.458 "name": 
"Nvme$subsystem", 00:42:44.458 "trtype": "$TEST_TRANSPORT", 00:42:44.458 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:44.458 "adrfam": "ipv4", 00:42:44.458 "trsvcid": "$NVMF_PORT", 00:42:44.458 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:44.458 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:44.458 "hdgst": ${hdgst:-false}, 00:42:44.458 "ddgst": ${ddgst:-false} 00:42:44.458 }, 00:42:44.458 "method": "bdev_nvme_attach_controller" 00:42:44.458 } 00:42:44.458 EOF 00:42:44.458 )") 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:44.458 06:48:35 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:44.458 "params": { 00:42:44.458 "name": "Nvme0", 00:42:44.458 "trtype": "tcp", 00:42:44.458 "traddr": "10.0.0.2", 00:42:44.458 "adrfam": "ipv4", 00:42:44.458 "trsvcid": "4420", 00:42:44.458 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:44.458 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:44.458 "hdgst": false, 00:42:44.458 "ddgst": false 00:42:44.458 }, 00:42:44.458 "method": "bdev_nvme_attach_controller" 00:42:44.458 }' 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:44.458 06:48:35 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:44.458 06:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:44.458 06:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:44.458 06:48:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:44.458 06:48:36 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:44.716 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:42:44.716 ... 00:42:44.716 fio-3.35 00:42:44.716 Starting 3 threads 00:42:51.273 00:42:51.273 filename0: (groupid=0, jobs=1): err= 0: pid=1305235: Fri Dec 13 06:48:41 2024 00:42:51.273 read: IOPS=310, BW=38.8MiB/s (40.7MB/s)(196MiB/5046msec) 00:42:51.273 slat (nsec): min=6241, max=51589, avg=11142.01, stdev=2231.82 00:42:51.273 clat (usec): min=4861, max=51527, avg=9612.74, stdev=5828.29 00:42:51.273 lat (usec): min=4873, max=51537, avg=9623.88, stdev=5828.34 00:42:51.273 clat percentiles (usec): 00:42:51.273 | 1.00th=[ 5800], 5.00th=[ 6390], 10.00th=[ 6915], 20.00th=[ 7832], 00:42:51.273 | 30.00th=[ 8225], 40.00th=[ 8586], 50.00th=[ 8848], 60.00th=[ 9241], 00:42:51.273 | 70.00th=[ 9634], 80.00th=[10028], 90.00th=[10552], 95.00th=[10945], 00:42:51.273 | 99.00th=[49021], 99.50th=[49546], 99.90th=[51119], 99.95th=[51643], 00:42:51.273 | 99.99th=[51643] 00:42:51.273 bw ( KiB/s): min=29184, max=44800, per=34.25%, avg=40081.80, stdev=4376.11, samples=10 00:42:51.273 iops : min= 228, max= 350, avg=313.10, stdev=34.20, samples=10 00:42:51.273 lat (msec) : 10=80.10%, 20=17.86%, 50=1.79%, 100=0.26% 00:42:51.273 cpu : usr=94.07%, sys=5.63%, ctx=16, majf=0, minf=53 00:42:51.273 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.273 filename0: (groupid=0, jobs=1): err= 0: pid=1305236: Fri Dec 13 06:48:41 2024 00:42:51.273 read: IOPS=322, BW=40.4MiB/s 
(42.3MB/s)(204MiB/5044msec) 00:42:51.273 slat (nsec): min=6248, max=27723, avg=10956.82, stdev=1985.41 00:42:51.273 clat (usec): min=3545, max=50726, avg=9250.97, stdev=4360.55 00:42:51.273 lat (usec): min=3552, max=50753, avg=9261.93, stdev=4360.81 00:42:51.273 clat percentiles (usec): 00:42:51.273 | 1.00th=[ 3949], 5.00th=[ 5997], 10.00th=[ 6390], 20.00th=[ 7701], 00:42:51.273 | 30.00th=[ 8291], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9372], 00:42:51.273 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10814], 95.00th=[11338], 00:42:51.273 | 99.00th=[44827], 99.50th=[49546], 99.90th=[50594], 99.95th=[50594], 00:42:51.273 | 99.99th=[50594] 00:42:51.273 bw ( KiB/s): min=37376, max=47872, per=35.59%, avg=41651.20, stdev=3478.94, samples=10 00:42:51.273 iops : min= 292, max= 374, avg=325.40, stdev=27.18, samples=10 00:42:51.273 lat (msec) : 4=1.17%, 10=74.89%, 20=22.90%, 50=0.74%, 100=0.31% 00:42:51.273 cpu : usr=94.09%, sys=5.61%, ctx=9, majf=0, minf=52 00:42:51.273 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 issued rwts: total=1629,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.273 filename0: (groupid=0, jobs=1): err= 0: pid=1305237: Fri Dec 13 06:48:41 2024 00:42:51.273 read: IOPS=280, BW=35.1MiB/s (36.8MB/s)(177MiB/5044msec) 00:42:51.273 slat (usec): min=6, max=133, avg=11.23, stdev= 3.72 00:42:51.273 clat (usec): min=3290, max=51180, avg=10635.94, stdev=5747.50 00:42:51.273 lat (usec): min=3297, max=51191, avg=10647.17, stdev=5748.00 00:42:51.273 clat percentiles (usec): 00:42:51.273 | 1.00th=[ 5604], 5.00th=[ 6521], 10.00th=[ 7242], 20.00th=[ 8586], 00:42:51.273 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10159], 60.00th=[10421], 00:42:51.273 | 70.00th=[10814], 80.00th=[11338], 
90.00th=[11994], 95.00th=[12649], 00:42:51.273 | 99.00th=[49021], 99.50th=[50070], 99.90th=[51119], 99.95th=[51119], 00:42:51.273 | 99.99th=[51119] 00:42:51.273 bw ( KiB/s): min=23808, max=44288, per=30.95%, avg=36224.00, stdev=5418.83, samples=10 00:42:51.273 iops : min= 186, max= 346, avg=283.00, stdev=42.33, samples=10 00:42:51.273 lat (msec) : 4=0.64%, 10=44.95%, 20=52.36%, 50=1.48%, 100=0.56% 00:42:51.273 cpu : usr=94.43%, sys=5.29%, ctx=12, majf=0, minf=64 00:42:51.273 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:51.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:51.273 issued rwts: total=1417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:51.273 latency : target=0, window=0, percentile=100.00%, depth=3 00:42:51.273 00:42:51.273 Run status group 0 (all jobs): 00:42:51.273 READ: bw=114MiB/s (120MB/s), 35.1MiB/s-40.4MiB/s (36.8MB/s-42.3MB/s), io=577MiB (605MB), run=5044-5046msec 00:42:51.273 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:42:51.273 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 bdev_null0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 [2024-12-13 06:48:42.230559] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 bdev_null1 
00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 
-- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 bdev_null2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@560 -- # config=() 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:51.274 { 00:42:51.274 "params": { 00:42:51.274 "name": "Nvme$subsystem", 00:42:51.274 "trtype": "$TEST_TRANSPORT", 00:42:51.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:51.274 "adrfam": "ipv4", 00:42:51.274 "trsvcid": "$NVMF_PORT", 00:42:51.274 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:51.274 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:51.274 "hdgst": ${hdgst:-false}, 00:42:51.274 "ddgst": ${ddgst:-false} 00:42:51.274 }, 00:42:51.274 "method": "bdev_nvme_attach_controller" 00:42:51.274 } 00:42:51.274 EOF 00:42:51.274 )") 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:51.274 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:51.274 { 00:42:51.274 "params": { 00:42:51.274 "name": "Nvme$subsystem", 00:42:51.274 "trtype": "$TEST_TRANSPORT", 00:42:51.274 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:51.274 "adrfam": "ipv4", 00:42:51.275 "trsvcid": "$NVMF_PORT", 00:42:51.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:51.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:51.275 "hdgst": ${hdgst:-false}, 00:42:51.275 "ddgst": ${ddgst:-false} 00:42:51.275 }, 00:42:51.275 "method": "bdev_nvme_attach_controller" 00:42:51.275 } 00:42:51.275 EOF 00:42:51.275 )") 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:51.275 
06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:51.275 { 00:42:51.275 "params": { 00:42:51.275 "name": "Nvme$subsystem", 00:42:51.275 "trtype": "$TEST_TRANSPORT", 00:42:51.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:51.275 "adrfam": "ipv4", 00:42:51.275 "trsvcid": "$NVMF_PORT", 00:42:51.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:51.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:51.275 "hdgst": ${hdgst:-false}, 00:42:51.275 "ddgst": ${ddgst:-false} 00:42:51.275 }, 00:42:51.275 "method": "bdev_nvme_attach_controller" 00:42:51.275 } 00:42:51.275 EOF 00:42:51.275 )") 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:51.275 "params": { 00:42:51.275 "name": "Nvme0", 00:42:51.275 "trtype": "tcp", 00:42:51.275 "traddr": "10.0.0.2", 00:42:51.275 "adrfam": "ipv4", 00:42:51.275 "trsvcid": "4420", 00:42:51.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:51.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:51.275 "hdgst": false, 00:42:51.275 "ddgst": false 00:42:51.275 }, 00:42:51.275 "method": "bdev_nvme_attach_controller" 00:42:51.275 },{ 00:42:51.275 "params": { 00:42:51.275 "name": "Nvme1", 00:42:51.275 "trtype": "tcp", 00:42:51.275 "traddr": "10.0.0.2", 00:42:51.275 "adrfam": "ipv4", 00:42:51.275 "trsvcid": "4420", 00:42:51.275 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:51.275 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:51.275 "hdgst": false, 00:42:51.275 "ddgst": false 00:42:51.275 }, 00:42:51.275 "method": "bdev_nvme_attach_controller" 00:42:51.275 },{ 00:42:51.275 "params": { 00:42:51.275 "name": "Nvme2", 00:42:51.275 "trtype": "tcp", 00:42:51.275 "traddr": "10.0.0.2", 00:42:51.275 "adrfam": "ipv4", 00:42:51.275 "trsvcid": "4420", 00:42:51.275 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:42:51.275 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:42:51.275 "hdgst": false, 00:42:51.275 "ddgst": false 00:42:51.275 }, 00:42:51.275 "method": "bdev_nvme_attach_controller" 00:42:51.275 }' 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:51.275 06:48:42 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:51.275 06:48:42 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:51.275 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:51.275 ... 00:42:51.275 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:51.275 ... 00:42:51.275 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:42:51.275 ... 
00:42:51.275 fio-3.35 00:42:51.275 Starting 24 threads 00:43:03.468 00:43:03.468 filename0: (groupid=0, jobs=1): err= 0: pid=1306300: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.468 slat (nsec): min=7293, max=74452, avg=20474.31, stdev=11280.26 00:43:03.468 clat (usec): min=11171, max=32190, avg=29681.55, stdev=1847.13 00:43:03.468 lat (usec): min=11193, max=32207, avg=29702.03, stdev=1847.05 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[18220], 5.00th=[28443], 10.00th=[28443], 20.00th=[28705], 00:43:03.468 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:03.468 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:43:03.468 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:43:03.468 | 99.99th=[32113] 00:43:03.468 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.468 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.468 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.468 cpu : usr=98.40%, sys=1.20%, ctx=15, majf=0, minf=9 00:43:03.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.468 filename0: (groupid=0, jobs=1): err= 0: pid=1306301: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10007msec) 00:43:03.468 slat (usec): min=6, max=138, avg=46.01, stdev=21.99 00:43:03.468 clat (usec): min=18035, max=49531, avg=29597.91, stdev=1522.61 00:43:03.468 lat (usec): min=18049, max=49545, avg=29643.92, stdev=1522.98 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[27657], 
5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:43:03.468 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:43:03.468 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.468 | 99.00th=[31065], 99.50th=[31851], 99.90th=[49546], 99.95th=[49546], 00:43:03.468 | 99.99th=[49546] 00:43:03.468 bw ( KiB/s): min= 1923, max= 2304, per=4.13%, avg=2121.47, stdev=97.23, samples=19 00:43:03.468 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.468 lat (msec) : 20=0.30%, 50=99.70% 00:43:03.468 cpu : usr=98.67%, sys=0.92%, ctx=36, majf=0, minf=9 00:43:03.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.468 filename0: (groupid=0, jobs=1): err= 0: pid=1306302: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=572, BW=2292KiB/s (2347kB/s)(22.4MiB/10007msec) 00:43:03.468 slat (nsec): min=6914, max=94946, avg=19390.01, stdev=17782.04 00:43:03.468 clat (usec): min=9396, max=79592, avg=27845.05, stdev=6517.65 00:43:03.468 lat (usec): min=9426, max=79612, avg=27864.44, stdev=6519.97 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[10159], 5.00th=[16319], 10.00th=[16581], 20.00th=[23200], 00:43:03.468 | 30.00th=[24773], 40.00th=[28705], 50.00th=[29754], 60.00th=[30278], 00:43:03.468 | 70.00th=[30278], 80.00th=[30540], 90.00th=[34341], 95.00th=[36963], 00:43:03.468 | 99.00th=[44827], 99.50th=[45351], 99.90th=[67634], 99.95th=[79168], 00:43:03.468 | 99.99th=[79168] 00:43:03.468 bw ( KiB/s): min= 1811, max= 2538, per=4.44%, avg=2281.37, stdev=185.29, samples=19 00:43:03.468 iops : min= 452, max= 634, avg=570.21, stdev=46.31, samples=19 00:43:03.468 lat 
(msec) : 10=0.91%, 20=9.35%, 50=89.36%, 100=0.38% 00:43:03.468 cpu : usr=98.37%, sys=1.23%, ctx=14, majf=0, minf=9 00:43:03.468 IO depths : 1=0.1%, 2=0.1%, 4=3.1%, 8=81.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 complete : 0=0.0%, 4=89.0%, 8=8.6%, 16=2.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.468 filename0: (groupid=0, jobs=1): err= 0: pid=1306304: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=532, BW=2131KiB/s (2182kB/s)(20.8MiB/10003msec) 00:43:03.468 slat (usec): min=6, max=105, avg=28.29, stdev=18.46 00:43:03.468 clat (usec): min=26852, max=36462, avg=29828.65, stdev=969.25 00:43:03.468 lat (usec): min=26865, max=36480, avg=29856.94, stdev=966.65 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28443], 20.00th=[28705], 00:43:03.468 | 30.00th=[29492], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.468 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:03.468 | 99.00th=[31327], 99.50th=[32375], 99.90th=[36439], 99.95th=[36439], 00:43:03.468 | 99.99th=[36439] 00:43:03.468 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2121.53, stdev=76.93, samples=19 00:43:03.468 iops : min= 512, max= 576, avg=530.26, stdev=19.15, samples=19 00:43:03.468 lat (msec) : 50=100.00% 00:43:03.468 cpu : usr=98.26%, sys=1.18%, ctx=78, majf=0, minf=9 00:43:03.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.468 filename0: 
(groupid=0, jobs=1): err= 0: pid=1306305: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10018msec) 00:43:03.468 slat (usec): min=6, max=117, avg=50.45, stdev=24.98 00:43:03.468 clat (usec): min=17703, max=32718, avg=29572.40, stdev=1191.16 00:43:03.468 lat (usec): min=17722, max=32736, avg=29622.84, stdev=1186.54 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.468 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:43:03.468 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:43:03.468 | 99.00th=[31327], 99.50th=[32113], 99.90th=[32637], 99.95th=[32637], 00:43:03.468 | 99.99th=[32637] 00:43:03.468 bw ( KiB/s): min= 2048, max= 2299, per=4.14%, avg=2128.05, stdev=86.30, samples=19 00:43:03.468 iops : min= 512, max= 574, avg=531.89, stdev=21.39, samples=19 00:43:03.468 lat (msec) : 20=0.30%, 50=99.70% 00:43:03.468 cpu : usr=98.64%, sys=0.96%, ctx=14, majf=0, minf=9 00:43:03.468 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.468 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.468 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.468 filename0: (groupid=0, jobs=1): err= 0: pid=1306306: Fri Dec 13 06:48:53 2024 00:43:03.468 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10010msec) 00:43:03.468 slat (usec): min=4, max=137, avg=46.83, stdev=20.97 00:43:03.468 clat (usec): min=18488, max=52499, avg=29638.73, stdev=1654.58 00:43:03.468 lat (usec): min=18507, max=52512, avg=29685.56, stdev=1653.99 00:43:03.468 clat percentiles (usec): 00:43:03.468 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:43:03.468 | 30.00th=[29230], 40.00th=[29492], 
50.00th=[29754], 60.00th=[30016], 00:43:03.468 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.468 | 99.00th=[31327], 99.50th=[32113], 99.90th=[52691], 99.95th=[52691], 00:43:03.468 | 99.99th=[52691] 00:43:03.468 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2121.32, stdev=97.57, samples=19 00:43:03.468 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.468 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:43:03.468 cpu : usr=98.60%, sys=0.97%, ctx=37, majf=0, minf=9 00:43:03.468 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.468 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename0: (groupid=0, jobs=1): err= 0: pid=1306307: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=534, BW=2137KiB/s (2188kB/s)(20.9MiB/10016msec) 00:43:03.469 slat (usec): min=7, max=111, avg=35.34, stdev=25.66 00:43:03.469 clat (usec): min=18031, max=50508, avg=29666.72, stdev=1648.93 00:43:03.469 lat (usec): min=18048, max=50536, avg=29702.06, stdev=1640.00 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[25822], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.469 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.469 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:03.469 | 99.00th=[31327], 99.50th=[32113], 99.90th=[50594], 99.95th=[50594], 00:43:03.469 | 99.99th=[50594] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2133.35, stdev=83.94, samples=20 00:43:03.469 iops : min= 512, max= 576, avg=533.30, stdev=20.97, samples=20 00:43:03.469 lat (msec) : 20=0.43%, 50=99.42%, 100=0.15% 00:43:03.469 cpu : usr=98.66%, sys=0.94%, ctx=13, majf=0, 
minf=9 00:43:03.469 IO depths : 1=6.1%, 2=12.3%, 4=24.7%, 8=50.6%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5350,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename0: (groupid=0, jobs=1): err= 0: pid=1306308: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.469 slat (nsec): min=8233, max=96480, avg=34395.40, stdev=18437.03 00:43:03.469 clat (usec): min=10291, max=43929, avg=29507.19, stdev=1912.36 00:43:03.469 lat (usec): min=10308, max=43943, avg=29541.59, stdev=1910.62 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.469 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.469 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.469 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:43:03.469 | 99.99th=[43779] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.469 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.469 lat (msec) : 20=1.23%, 50=98.77% 00:43:03.469 cpu : usr=98.64%, sys=0.96%, ctx=17, majf=0, minf=9 00:43:03.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306309: Fri Dec 13 06:48:53 2024 00:43:03.469 read: 
IOPS=532, BW=2128KiB/s (2180kB/s)(20.8MiB/10013msec) 00:43:03.469 slat (usec): min=4, max=119, avg=49.73, stdev=25.69 00:43:03.469 clat (usec): min=17896, max=54904, avg=29559.08, stdev=1783.21 00:43:03.469 lat (usec): min=17914, max=54919, avg=29608.81, stdev=1781.70 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.469 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:43:03.469 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.469 | 99.00th=[31065], 99.50th=[31851], 99.90th=[54789], 99.95th=[54789], 00:43:03.469 | 99.99th=[54789] 00:43:03.469 bw ( KiB/s): min= 1920, max= 2299, per=4.13%, avg=2121.32, stdev=97.20, samples=19 00:43:03.469 iops : min= 480, max= 574, avg=530.21, stdev=24.13, samples=19 00:43:03.469 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:43:03.469 cpu : usr=98.63%, sys=0.98%, ctx=13, majf=0, minf=9 00:43:03.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306310: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=533, BW=2134KiB/s (2185kB/s)(20.9MiB/10018msec) 00:43:03.469 slat (usec): min=6, max=121, avg=51.05, stdev=23.93 00:43:03.469 clat (usec): min=17483, max=32781, avg=29570.49, stdev=1183.44 00:43:03.469 lat (usec): min=17506, max=32799, avg=29621.54, stdev=1177.79 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.469 | 30.00th=[29230], 40.00th=[29754], 50.00th=[29754], 60.00th=[30016], 00:43:03.469 | 70.00th=[30278], 80.00th=[30278], 
90.00th=[30802], 95.00th=[31065], 00:43:03.469 | 99.00th=[31327], 99.50th=[31851], 99.90th=[32637], 99.95th=[32637], 00:43:03.469 | 99.99th=[32900] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2299, per=4.14%, avg=2128.05, stdev=86.30, samples=19 00:43:03.469 iops : min= 512, max= 574, avg=531.89, stdev=21.39, samples=19 00:43:03.469 lat (msec) : 20=0.30%, 50=99.70% 00:43:03.469 cpu : usr=98.65%, sys=0.94%, ctx=13, majf=0, minf=9 00:43:03.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306311: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=533, BW=2133KiB/s (2184kB/s)(20.9MiB/10021msec) 00:43:03.469 slat (nsec): min=6244, max=24074, avg=9570.13, stdev=2276.53 00:43:03.469 clat (usec): min=16126, max=42990, avg=29918.20, stdev=1604.65 00:43:03.469 lat (usec): min=16138, max=43003, avg=29927.77, stdev=1604.70 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[26870], 5.00th=[28443], 10.00th=[28705], 20.00th=[28705], 00:43:03.469 | 30.00th=[29230], 40.00th=[30278], 50.00th=[30278], 60.00th=[30278], 00:43:03.469 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:43:03.469 | 99.00th=[31851], 99.50th=[41157], 99.90th=[42730], 99.95th=[42730], 00:43:03.469 | 99.99th=[42730] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2299, per=4.15%, avg=2130.70, stdev=83.69, samples=20 00:43:03.469 iops : min= 512, max= 574, avg=532.60, stdev=20.77, samples=20 00:43:03.469 lat (msec) : 20=0.65%, 50=99.35% 00:43:03.469 cpu : usr=98.76%, sys=0.84%, ctx=13, majf=0, minf=9 00:43:03.469 IO depths : 1=4.6%, 2=10.8%, 4=24.9%, 8=51.8%, 16=7.9%, 32=0.0%, >=64=0.0% 
00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306312: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.469 slat (usec): min=7, max=101, avg=34.11, stdev=18.33 00:43:03.469 clat (usec): min=11596, max=32138, avg=29507.19, stdev=1874.56 00:43:03.469 lat (usec): min=11631, max=32175, avg=29541.30, stdev=1872.66 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.469 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.469 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.469 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:43:03.469 | 99.99th=[32113] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.469 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.469 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.469 cpu : usr=98.55%, sys=1.04%, ctx=13, majf=0, minf=9 00:43:03.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306314: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:43:03.469 slat (nsec): min=5580, max=87441, 
avg=40541.79, stdev=15921.72 00:43:03.469 clat (usec): min=18015, max=50092, avg=29694.66, stdev=1572.44 00:43:03.469 lat (usec): min=18043, max=50106, avg=29735.20, stdev=1569.12 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[27657], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:43:03.469 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30016], 60.00th=[30016], 00:43:03.469 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:43:03.469 | 99.00th=[31327], 99.50th=[32113], 99.90th=[50070], 99.95th=[50070], 00:43:03.469 | 99.99th=[50070] 00:43:03.469 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2121.32, stdev=97.57, samples=19 00:43:03.469 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.469 lat (msec) : 20=0.30%, 50=99.55%, 100=0.15% 00:43:03.469 cpu : usr=98.58%, sys=0.94%, ctx=82, majf=0, minf=9 00:43:03.469 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.469 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.469 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.469 filename1: (groupid=0, jobs=1): err= 0: pid=1306315: Fri Dec 13 06:48:53 2024 00:43:03.469 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.469 slat (usec): min=7, max=102, avg=35.03, stdev=18.40 00:43:03.469 clat (usec): min=11435, max=32086, avg=29499.65, stdev=1877.62 00:43:03.469 lat (usec): min=11458, max=32113, avg=29534.68, stdev=1875.59 00:43:03.469 clat percentiles (usec): 00:43:03.469 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.469 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.469 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.469 | 99.00th=[31065], 99.50th=[31589], 
99.90th=[31851], 99.95th=[32113], 00:43:03.469 | 99.99th=[32113] 00:43:03.469 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.469 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.469 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.469 cpu : usr=98.57%, sys=1.03%, ctx=10, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename1: (groupid=0, jobs=1): err= 0: pid=1306316: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10009msec) 00:43:03.470 slat (usec): min=4, max=116, avg=50.56, stdev=25.05 00:43:03.470 clat (usec): min=17857, max=50684, avg=29549.09, stdev=1609.15 00:43:03.470 lat (usec): min=17872, max=50697, avg=29599.65, stdev=1607.58 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.470 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:43:03.470 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31065], 99.50th=[31589], 99.90th=[50594], 99.95th=[50594], 00:43:03.470 | 99.99th=[50594] 00:43:03.470 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2121.32, stdev=97.57, samples=19 00:43:03.470 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.470 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:43:03.470 cpu : usr=98.75%, sys=0.84%, ctx=14, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename1: (groupid=0, jobs=1): err= 0: pid=1306317: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.470 slat (usec): min=8, max=103, avg=31.94, stdev=20.62 00:43:03.470 clat (usec): min=10243, max=45340, avg=29563.72, stdev=1986.20 00:43:03.470 lat (usec): min=10263, max=45382, avg=29595.66, stdev=1981.44 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.470 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30278], 60.00th=[30278], 00:43:03.470 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[30802], 00:43:03.470 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[42206], 00:43:03.470 | 99.99th=[45351] 00:43:03.470 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.470 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.470 lat (msec) : 20=1.26%, 50=98.74% 00:43:03.470 cpu : usr=98.51%, sys=1.09%, ctx=15, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306318: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.470 slat (nsec): min=7905, max=93681, avg=33441.26, stdev=18667.78 00:43:03.470 clat (usec): min=12090, max=32145, avg=29511.79, 
stdev=1856.78 00:43:03.470 lat (usec): min=12100, max=32178, avg=29545.23, stdev=1855.22 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[17957], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.470 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.470 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31065], 99.50th=[31327], 99.90th=[32113], 99.95th=[32113], 00:43:03.470 | 99.99th=[32113] 00:43:03.470 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.470 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.470 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.470 cpu : usr=98.66%, sys=0.93%, ctx=13, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306319: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:43:03.470 slat (usec): min=4, max=119, avg=51.80, stdev=24.51 00:43:03.470 clat (usec): min=17758, max=50344, avg=29584.28, stdev=1616.31 00:43:03.470 lat (usec): min=17787, max=50357, avg=29636.08, stdev=1613.06 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.470 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:43:03.470 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31327], 99.50th=[31851], 99.90th=[50070], 99.95th=[50594], 00:43:03.470 | 99.99th=[50594] 00:43:03.470 bw ( KiB/s): min= 1920, max= 
2304, per=4.13%, avg=2121.32, stdev=97.57, samples=19 00:43:03.470 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.470 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:43:03.470 cpu : usr=98.65%, sys=0.94%, ctx=16, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306320: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.470 slat (nsec): min=8497, max=94496, avg=34488.20, stdev=19409.66 00:43:03.470 clat (usec): min=11231, max=32242, avg=29535.17, stdev=1891.93 00:43:03.470 lat (usec): min=11253, max=32257, avg=29569.66, stdev=1888.07 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[17695], 5.00th=[27919], 10.00th=[28181], 20.00th=[28443], 00:43:03.470 | 30.00th=[28967], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.470 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31065], 99.50th=[31589], 99.90th=[32113], 99.95th=[32113], 00:43:03.470 | 99.99th=[32113] 00:43:03.470 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.470 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.470 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.470 cpu : usr=98.51%, sys=1.07%, ctx=22, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 
issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306321: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=536, BW=2145KiB/s (2196kB/s)(21.0MiB/10027msec) 00:43:03.470 slat (nsec): min=7563, max=93716, avg=29420.09, stdev=12801.98 00:43:03.470 clat (usec): min=11508, max=32075, avg=29600.16, stdev=1851.63 00:43:03.470 lat (usec): min=11541, max=32097, avg=29629.58, stdev=1850.40 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[17695], 5.00th=[28181], 10.00th=[28443], 20.00th=[28705], 00:43:03.470 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.470 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31065], 99.50th=[31589], 99.90th=[31851], 99.95th=[32113], 00:43:03.470 | 99.99th=[32113] 00:43:03.470 bw ( KiB/s): min= 2048, max= 2304, per=4.17%, avg=2143.25, stdev=91.07, samples=20 00:43:03.470 iops : min= 512, max= 576, avg=535.70, stdev=22.68, samples=20 00:43:03.470 lat (msec) : 20=1.19%, 50=98.81% 00:43:03.470 cpu : usr=98.32%, sys=1.18%, ctx=75, majf=0, minf=9 00:43:03.470 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306322: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:43:03.470 slat (usec): min=4, max=116, avg=49.23, stdev=25.60 00:43:03.470 clat (usec): min=17845, max=54430, avg=29556.25, stdev=1634.89 00:43:03.470 lat (usec): min=17860, max=54444, avg=29605.49, stdev=1633.54 00:43:03.470 
clat percentiles (usec): 00:43:03.470 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.470 | 30.00th=[29230], 40.00th=[29492], 50.00th=[29754], 60.00th=[30016], 00:43:03.470 | 70.00th=[30016], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.470 | 99.00th=[31327], 99.50th=[32375], 99.90th=[50070], 99.95th=[50070], 00:43:03.470 | 99.99th=[54264] 00:43:03.470 bw ( KiB/s): min= 1920, max= 2304, per=4.13%, avg=2121.32, stdev=97.57, samples=19 00:43:03.470 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.470 lat (msec) : 20=0.30%, 50=99.40%, 100=0.30% 00:43:03.470 cpu : usr=98.62%, sys=0.99%, ctx=15, majf=0, minf=9 00:43:03.470 IO depths : 1=6.0%, 2=12.2%, 4=24.8%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:03.470 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.470 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.470 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.470 filename2: (groupid=0, jobs=1): err= 0: pid=1306323: Fri Dec 13 06:48:53 2024 00:43:03.470 read: IOPS=532, BW=2130KiB/s (2181kB/s)(20.8MiB/10005msec) 00:43:03.470 slat (usec): min=6, max=115, avg=39.03, stdev=25.84 00:43:03.470 clat (usec): min=25496, max=39849, avg=29750.02, stdev=1104.31 00:43:03.470 lat (usec): min=25516, max=39866, avg=29789.04, stdev=1089.86 00:43:03.470 clat percentiles (usec): 00:43:03.470 | 1.00th=[27395], 5.00th=[27919], 10.00th=[28181], 20.00th=[28705], 00:43:03.470 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.470 | 70.00th=[30278], 80.00th=[30540], 90.00th=[30802], 95.00th=[31065], 00:43:03.470 | 99.00th=[31327], 99.50th=[32375], 99.90th=[38011], 99.95th=[38011], 00:43:03.470 | 99.99th=[40109] 00:43:03.470 bw ( KiB/s): min= 2048, max= 2304, per=4.13%, avg=2121.32, stdev=77.14, samples=19 00:43:03.470 iops : min= 512, max= 576, 
avg=530.21, stdev=19.21, samples=19 00:43:03.470 lat (msec) : 50=100.00% 00:43:03.471 cpu : usr=98.67%, sys=0.93%, ctx=13, majf=0, minf=9 00:43:03.471 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.471 filename2: (groupid=0, jobs=1): err= 0: pid=1306324: Fri Dec 13 06:48:53 2024 00:43:03.471 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10007msec) 00:43:03.471 slat (nsec): min=7121, max=93947, avg=32539.18, stdev=15132.05 00:43:03.471 clat (usec): min=14217, max=79508, avg=29764.73, stdev=2672.53 00:43:03.471 lat (usec): min=14279, max=79528, avg=29797.27, stdev=2670.51 00:43:03.471 clat percentiles (usec): 00:43:03.471 | 1.00th=[27919], 5.00th=[28181], 10.00th=[28443], 20.00th=[28705], 00:43:03.471 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.471 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30540], 95.00th=[30802], 00:43:03.471 | 99.00th=[31327], 99.50th=[31851], 99.90th=[67634], 99.95th=[79168], 00:43:03.471 | 99.99th=[79168] 00:43:03.471 bw ( KiB/s): min= 1907, max= 2304, per=4.13%, avg=2120.63, stdev=99.09, samples=19 00:43:03.471 iops : min= 476, max= 576, avg=530.00, stdev=24.75, samples=19 00:43:03.471 lat (msec) : 20=0.64%, 50=99.06%, 100=0.30% 00:43:03.471 cpu : usr=98.63%, sys=0.92%, ctx=27, majf=0, minf=9 00:43:03.471 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 issued rwts: total=5326,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.471 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:43:03.471 filename2: (groupid=0, jobs=1): err= 0: pid=1306325: Fri Dec 13 06:48:53 2024 00:43:03.471 read: IOPS=532, BW=2129KiB/s (2181kB/s)(20.8MiB/10008msec) 00:43:03.471 slat (usec): min=8, max=120, avg=41.27, stdev=25.77 00:43:03.471 clat (usec): min=17793, max=53568, avg=29717.18, stdev=1634.29 00:43:03.471 lat (usec): min=17828, max=53587, avg=29758.45, stdev=1624.40 00:43:03.471 clat percentiles (usec): 00:43:03.471 | 1.00th=[27395], 5.00th=[27657], 10.00th=[27919], 20.00th=[28443], 00:43:03.471 | 30.00th=[29230], 40.00th=[30016], 50.00th=[30016], 60.00th=[30278], 00:43:03.471 | 70.00th=[30278], 80.00th=[30278], 90.00th=[30802], 95.00th=[31065], 00:43:03.471 | 99.00th=[31327], 99.50th=[32375], 99.90th=[49546], 99.95th=[49546], 00:43:03.471 | 99.99th=[53740] 00:43:03.471 bw ( KiB/s): min= 1923, max= 2304, per=4.13%, avg=2121.47, stdev=97.23, samples=19 00:43:03.471 iops : min= 480, max= 576, avg=530.21, stdev=24.28, samples=19 00:43:03.471 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04% 00:43:03.471 cpu : usr=98.82%, sys=0.78%, ctx=13, majf=0, minf=9 00:43:03.471 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:03.471 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:03.471 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:03.471 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:03.471 00:43:03.471 Run status group 0 (all jobs): 00:43:03.471 READ: bw=50.1MiB/s (52.6MB/s), 2128KiB/s-2292KiB/s (2180kB/s-2347kB/s), io=503MiB (527MB), run=10003-10027msec 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.471 06:48:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:03.471 
06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 bdev_null0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:43:03.471 [2024-12-13 06:48:53.913767] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 bdev_null1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:03.471 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:03.472 { 00:43:03.472 "params": { 00:43:03.472 "name": "Nvme$subsystem", 00:43:03.472 "trtype": "$TEST_TRANSPORT", 00:43:03.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.472 "adrfam": "ipv4", 00:43:03.472 "trsvcid": "$NVMF_PORT", 00:43:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.472 
"hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.472 "hdgst": ${hdgst:-false}, 00:43:03.472 "ddgst": ${ddgst:-false} 00:43:03.472 }, 00:43:03.472 "method": "bdev_nvme_attach_controller" 00:43:03.472 } 00:43:03.472 EOF 00:43:03.472 )") 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in 
"${@:-1}" 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:03.472 { 00:43:03.472 "params": { 00:43:03.472 "name": "Nvme$subsystem", 00:43:03.472 "trtype": "$TEST_TRANSPORT", 00:43:03.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:03.472 "adrfam": "ipv4", 00:43:03.472 "trsvcid": "$NVMF_PORT", 00:43:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:03.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:03.472 "hdgst": ${hdgst:-false}, 00:43:03.472 "ddgst": ${ddgst:-false} 00:43:03.472 }, 00:43:03.472 "method": "bdev_nvme_attach_controller" 00:43:03.472 } 00:43:03.472 EOF 00:43:03.472 )") 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:03.472 "params": { 00:43:03.472 "name": "Nvme0", 00:43:03.472 "trtype": "tcp", 00:43:03.472 "traddr": "10.0.0.2", 00:43:03.472 "adrfam": "ipv4", 00:43:03.472 "trsvcid": "4420", 00:43:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:03.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:03.472 "hdgst": false, 00:43:03.472 "ddgst": false 00:43:03.472 }, 00:43:03.472 "method": "bdev_nvme_attach_controller" 00:43:03.472 },{ 00:43:03.472 "params": { 00:43:03.472 "name": "Nvme1", 00:43:03.472 "trtype": "tcp", 00:43:03.472 "traddr": "10.0.0.2", 00:43:03.472 "adrfam": "ipv4", 00:43:03.472 "trsvcid": "4420", 00:43:03.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:03.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:03.472 "hdgst": false, 00:43:03.472 "ddgst": false 00:43:03.472 }, 00:43:03.472 "method": "bdev_nvme_attach_controller" 00:43:03.472 }' 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.472 06:48:53 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:03.472 06:48:54 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:03.472 06:48:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:03.472 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:03.472 ... 00:43:03.472 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:03.472 ... 00:43:03.472 fio-3.35 00:43:03.472 Starting 4 threads 00:43:08.728 00:43:08.728 filename0: (groupid=0, jobs=1): err= 0: pid=1308255: Fri Dec 13 06:49:00 2024 00:43:08.728 read: IOPS=2862, BW=22.4MiB/s (23.4MB/s)(112MiB/5003msec) 00:43:08.728 slat (nsec): min=6155, max=36589, avg=8911.51, stdev=2935.91 00:43:08.728 clat (usec): min=795, max=5149, avg=2767.94, stdev=378.37 00:43:08.728 lat (usec): min=815, max=5161, avg=2776.85, stdev=378.19 00:43:08.728 clat percentiles (usec): 00:43:08.728 | 1.00th=[ 1647], 5.00th=[ 2212], 10.00th=[ 2311], 20.00th=[ 2474], 00:43:08.728 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2900], 60.00th=[ 2966], 00:43:08.728 | 70.00th=[ 2966], 80.00th=[ 2999], 90.00th=[ 3064], 95.00th=[ 3261], 00:43:08.728 | 99.00th=[ 3818], 99.50th=[ 4015], 99.90th=[ 4424], 99.95th=[ 4490], 00:43:08.728 | 99.99th=[ 5145] 00:43:08.728 bw ( KiB/s): min=21536, max=24736, per=27.00%, avg=22905.60, stdev=996.13, samples=10 00:43:08.728 iops : min= 2692, max= 3092, avg=2863.20, stdev=124.52, samples=10 00:43:08.728 lat (usec) : 1000=0.13% 00:43:08.728 lat (msec) : 2=2.22%, 4=97.09%, 10=0.56% 00:43:08.728 cpu : usr=95.72%, sys=3.94%, ctx=9, majf=0, minf=0 00:43:08.728 IO depths : 1=0.4%, 2=6.6%, 4=64.2%, 8=28.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 complete : 
0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 issued rwts: total=14319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.728 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:08.728 filename0: (groupid=0, jobs=1): err= 0: pid=1308256: Fri Dec 13 06:49:00 2024 00:43:08.728 read: IOPS=2551, BW=19.9MiB/s (20.9MB/s)(99.7MiB/5003msec) 00:43:08.728 slat (nsec): min=6174, max=42453, avg=8573.43, stdev=2947.30 00:43:08.728 clat (usec): min=1157, max=7964, avg=3110.01, stdev=424.57 00:43:08.728 lat (usec): min=1168, max=7975, avg=3118.58, stdev=424.42 00:43:08.728 clat percentiles (usec): 00:43:08.728 | 1.00th=[ 2212], 5.00th=[ 2573], 10.00th=[ 2769], 20.00th=[ 2933], 00:43:08.728 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2999], 60.00th=[ 3032], 00:43:08.728 | 70.00th=[ 3130], 80.00th=[ 3294], 90.00th=[ 3621], 95.00th=[ 3884], 00:43:08.728 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5407], 99.95th=[ 6325], 00:43:08.728 | 99.99th=[ 7963] 00:43:08.728 bw ( KiB/s): min=19616, max=21328, per=24.07%, avg=20418.30, stdev=670.19, samples=10 00:43:08.728 iops : min= 2452, max= 2666, avg=2552.20, stdev=83.85, samples=10 00:43:08.728 lat (msec) : 2=0.36%, 4=95.57%, 10=4.07% 00:43:08.728 cpu : usr=96.30%, sys=3.38%, ctx=6, majf=0, minf=9 00:43:08.728 IO depths : 1=0.1%, 2=1.1%, 4=72.0%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 issued rwts: total=12767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.728 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:08.728 filename1: (groupid=0, jobs=1): err= 0: pid=1308257: Fri Dec 13 06:49:00 2024 00:43:08.728 read: IOPS=2604, BW=20.3MiB/s (21.3MB/s)(102MiB/5005msec) 00:43:08.728 slat (nsec): min=6160, max=49901, avg=8584.68, stdev=2921.96 00:43:08.728 clat (usec): min=905, max=8006, avg=3046.72, stdev=407.57 
00:43:08.728 lat (usec): min=915, max=8016, avg=3055.31, stdev=407.45 00:43:08.728 clat percentiles (usec): 00:43:08.728 | 1.00th=[ 2147], 5.00th=[ 2474], 10.00th=[ 2638], 20.00th=[ 2868], 00:43:08.728 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:08.728 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3752], 00:43:08.728 | 99.00th=[ 4490], 99.50th=[ 4817], 99.90th=[ 5342], 99.95th=[ 5735], 00:43:08.728 | 99.99th=[ 8029] 00:43:08.728 bw ( KiB/s): min=20224, max=21344, per=24.57%, avg=20842.10, stdev=377.61, samples=10 00:43:08.728 iops : min= 2528, max= 2668, avg=2605.20, stdev=47.16, samples=10 00:43:08.728 lat (usec) : 1000=0.01% 00:43:08.728 lat (msec) : 2=0.40%, 4=96.59%, 10=3.01% 00:43:08.728 cpu : usr=95.48%, sys=4.20%, ctx=6, majf=0, minf=0 00:43:08.728 IO depths : 1=0.1%, 2=1.2%, 4=70.5%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 issued rwts: total=13037,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.728 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:08.728 filename1: (groupid=0, jobs=1): err= 0: pid=1308258: Fri Dec 13 06:49:00 2024 00:43:08.728 read: IOPS=2587, BW=20.2MiB/s (21.2MB/s)(101MiB/5004msec) 00:43:08.728 slat (nsec): min=6175, max=56202, avg=8883.30, stdev=3116.15 00:43:08.728 clat (usec): min=841, max=8716, avg=3064.83, stdev=434.45 00:43:08.728 lat (usec): min=852, max=8722, avg=3073.72, stdev=434.25 00:43:08.728 clat percentiles (usec): 00:43:08.728 | 1.00th=[ 2180], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2900], 00:43:08.728 | 30.00th=[ 2966], 40.00th=[ 2966], 50.00th=[ 2966], 60.00th=[ 2999], 00:43:08.728 | 70.00th=[ 3064], 80.00th=[ 3261], 90.00th=[ 3556], 95.00th=[ 3851], 00:43:08.728 | 99.00th=[ 4686], 99.50th=[ 5014], 99.90th=[ 5342], 99.95th=[ 6390], 00:43:08.728 | 99.99th=[ 8717] 00:43:08.728 bw ( 
KiB/s): min=19920, max=21712, per=24.42%, avg=20712.00, stdev=542.81, samples=10 00:43:08.728 iops : min= 2490, max= 2714, avg=2589.00, stdev=67.85, samples=10 00:43:08.728 lat (usec) : 1000=0.03% 00:43:08.728 lat (msec) : 2=0.36%, 4=95.71%, 10=3.90% 00:43:08.728 cpu : usr=95.98%, sys=3.68%, ctx=7, majf=0, minf=9 00:43:08.728 IO depths : 1=0.1%, 2=2.1%, 4=70.4%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:08.728 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:08.728 issued rwts: total=12950,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:08.728 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:08.728 00:43:08.728 Run status group 0 (all jobs): 00:43:08.728 READ: bw=82.8MiB/s (86.9MB/s), 19.9MiB/s-22.4MiB/s (20.9MB/s-23.4MB/s), io=415MiB (435MB), run=5003-5005msec 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:08.728 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.729 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.987 00:43:08.987 real 0m24.503s 00:43:08.987 user 4m52.581s 00:43:08.987 sys 0m5.061s 00:43:08.987 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 ************************************ 00:43:08.987 END TEST fio_dif_rand_params 00:43:08.987 ************************************ 00:43:08.987 06:49:00 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:08.987 06:49:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:08.987 06:49:00 nvmf_dif -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 ************************************ 00:43:08.987 START TEST fio_dif_digest 00:43:08.987 ************************************ 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # 
set +x 00:43:08.987 bdev_null0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:08.987 [2024-12-13 06:49:00.497097] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest 
-- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:08.987 { 00:43:08.987 "params": { 00:43:08.987 "name": "Nvme$subsystem", 00:43:08.987 "trtype": "$TEST_TRANSPORT", 00:43:08.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:08.987 "adrfam": "ipv4", 00:43:08.987 "trsvcid": "$NVMF_PORT", 00:43:08.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:08.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:08.987 "hdgst": ${hdgst:-false}, 00:43:08.987 "ddgst": ${ddgst:-false} 00:43:08.987 }, 00:43:08.987 "method": "bdev_nvme_attach_controller" 00:43:08.987 } 00:43:08.987 EOF 00:43:08.987 )") 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 
-- # shift 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:08.987 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:08.988 "params": { 00:43:08.988 "name": "Nvme0", 00:43:08.988 "trtype": "tcp", 00:43:08.988 "traddr": "10.0.0.2", 00:43:08.988 "adrfam": "ipv4", 00:43:08.988 "trsvcid": "4420", 00:43:08.988 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:08.988 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:08.988 "hdgst": true, 00:43:08.988 "ddgst": true 00:43:08.988 }, 00:43:08.988 "method": "bdev_nvme_attach_controller" 00:43:08.988 }' 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 
00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:08.988 06:49:00 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:09.245 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:09.245 ... 00:43:09.245 fio-3.35 00:43:09.245 Starting 3 threads 00:43:21.491 00:43:21.491 filename0: (groupid=0, jobs=1): err= 0: pid=1309422: Fri Dec 13 06:49:11 2024 00:43:21.491 read: IOPS=289, BW=36.1MiB/s (37.9MB/s)(363MiB/10046msec) 00:43:21.491 slat (nsec): min=6417, max=30506, avg=11485.09, stdev=1861.51 00:43:21.491 clat (usec): min=7810, max=50496, avg=10349.88, stdev=1261.26 00:43:21.491 lat (usec): min=7818, max=50508, avg=10361.37, stdev=1261.24 00:43:21.491 clat percentiles (usec): 00:43:21.491 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9634], 00:43:21.491 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:43:21.491 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11338], 95.00th=[11600], 00:43:21.491 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14222], 99.95th=[46400], 00:43:21.491 | 99.99th=[50594] 00:43:21.491 bw ( KiB/s): min=35584, max=38400, per=35.42%, avg=37145.60, stdev=821.81, samples=20 00:43:21.491 iops : min= 278, max= 300, avg=290.20, stdev= 6.42, samples=20 00:43:21.491 lat (msec) : 10=34.09%, 20=65.84%, 50=0.03%, 100=0.03% 00:43:21.491 cpu : 
usr=94.31%, sys=5.39%, ctx=22, majf=0, minf=55 00:43:21.491 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 issued rwts: total=2904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:21.491 filename0: (groupid=0, jobs=1): err= 0: pid=1309423: Fri Dec 13 06:49:11 2024 00:43:21.491 read: IOPS=269, BW=33.6MiB/s (35.3MB/s)(338MiB/10045msec) 00:43:21.491 slat (nsec): min=6484, max=22452, avg=11528.85, stdev=1673.62 00:43:21.491 clat (usec): min=7532, max=48013, avg=11119.92, stdev=1270.70 00:43:21.491 lat (usec): min=7544, max=48026, avg=11131.44, stdev=1270.69 00:43:21.491 clat percentiles (usec): 00:43:21.491 | 1.00th=[ 9110], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10421], 00:43:21.491 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:43:21.491 | 70.00th=[11469], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:43:21.491 | 99.00th=[12911], 99.50th=[13173], 99.90th=[15270], 99.95th=[45351], 00:43:21.491 | 99.99th=[47973] 00:43:21.491 bw ( KiB/s): min=33024, max=36352, per=32.97%, avg=34572.80, stdev=911.65, samples=20 00:43:21.491 iops : min= 258, max= 284, avg=270.10, stdev= 7.12, samples=20 00:43:21.491 lat (msec) : 10=9.47%, 20=90.46%, 50=0.07% 00:43:21.491 cpu : usr=94.54%, sys=5.16%, ctx=20, majf=0, minf=41 00:43:21.491 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 issued rwts: total=2703,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:21.491 filename0: (groupid=0, jobs=1): err= 0: pid=1309424: 
Fri Dec 13 06:49:11 2024 00:43:21.491 read: IOPS=261, BW=32.6MiB/s (34.2MB/s)(328MiB/10046msec) 00:43:21.491 slat (nsec): min=6458, max=40052, avg=11481.17, stdev=1802.36 00:43:21.491 clat (usec): min=7325, max=47522, avg=11460.49, stdev=1266.39 00:43:21.491 lat (usec): min=7337, max=47533, avg=11471.97, stdev=1266.44 00:43:21.491 clat percentiles (usec): 00:43:21.491 | 1.00th=[ 9634], 5.00th=[10159], 10.00th=[10421], 20.00th=[10814], 00:43:21.491 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:43:21.491 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12518], 95.00th=[12911], 00:43:21.491 | 99.00th=[13698], 99.50th=[13960], 99.90th=[15270], 99.95th=[45351], 00:43:21.491 | 99.99th=[47449] 00:43:21.491 bw ( KiB/s): min=32256, max=34304, per=31.99%, avg=33548.80, stdev=553.91, samples=20 00:43:21.491 iops : min= 252, max= 268, avg=262.10, stdev= 4.33, samples=20 00:43:21.491 lat (msec) : 10=3.28%, 20=96.65%, 50=0.08% 00:43:21.491 cpu : usr=94.70%, sys=4.99%, ctx=27, majf=0, minf=68 00:43:21.491 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:21.491 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:21.491 issued rwts: total=2623,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:21.491 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:21.491 00:43:21.491 Run status group 0 (all jobs): 00:43:21.491 READ: bw=102MiB/s (107MB/s), 32.6MiB/s-36.1MiB/s (34.2MB/s-37.9MB/s), io=1029MiB (1079MB), run=10045-10046msec 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest 
-- target/dif.sh@36 -- # local sub_id=0 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:21.491 00:43:21.491 real 0m11.311s 00:43:21.491 user 0m35.338s 00:43:21.491 sys 0m1.922s 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:21.491 06:49:11 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:21.491 ************************************ 00:43:21.491 END TEST fio_dif_digest 00:43:21.491 ************************************ 00:43:21.491 06:49:11 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:21.491 06:49:11 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:21.491 rmmod nvme_tcp 00:43:21.491 rmmod nvme_fabrics 00:43:21.491 rmmod nvme_keyring 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 1301036 ']' 00:43:21.491 06:49:11 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 1301036 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1301036 ']' 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1301036 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1301036 00:43:21.491 06:49:11 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:21.492 06:49:11 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:21.492 06:49:11 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1301036' 00:43:21.492 killing process with pid 1301036 00:43:21.492 06:49:11 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1301036 00:43:21.492 06:49:11 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1301036 00:43:21.492 06:49:12 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:21.492 06:49:12 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:23.398 Waiting for block devices as requested 00:43:23.398 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:23.398 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:23.398 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:23.398 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:23.657 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:23.657 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:23.657 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:23.915 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 
00:43:23.915 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:23.915 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:24.174 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:24.174 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:24.174 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:24.174 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:24.433 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:24.433 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:24.433 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:24.692 06:49:16 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:24.692 06:49:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:24.692 06:49:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.597 06:49:18 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:26.597 00:43:26.597 real 1m14.160s 00:43:26.597 user 7m10.641s 00:43:26.597 sys 0m20.635s 00:43:26.597 06:49:18 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:26.597 06:49:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:26.597 ************************************ 00:43:26.597 END TEST nvmf_dif 00:43:26.597 ************************************ 00:43:26.597 06:49:18 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:26.597 06:49:18 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:26.597 06:49:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:26.597 06:49:18 -- common/autotest_common.sh@10 -- # set +x 00:43:26.856 ************************************ 00:43:26.856 START TEST nvmf_abort_qd_sizes 00:43:26.856 ************************************ 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:26.856 * Looking for test storage... 00:43:26.856 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 
00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:26.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.856 --rc genhtml_branch_coverage=1 00:43:26.856 --rc genhtml_function_coverage=1 00:43:26.856 --rc 
genhtml_legend=1 00:43:26.856 --rc geninfo_all_blocks=1 00:43:26.856 --rc geninfo_unexecuted_blocks=1 00:43:26.856 00:43:26.856 ' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:26.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.856 --rc genhtml_branch_coverage=1 00:43:26.856 --rc genhtml_function_coverage=1 00:43:26.856 --rc genhtml_legend=1 00:43:26.856 --rc geninfo_all_blocks=1 00:43:26.856 --rc geninfo_unexecuted_blocks=1 00:43:26.856 00:43:26.856 ' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:26.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.856 --rc genhtml_branch_coverage=1 00:43:26.856 --rc genhtml_function_coverage=1 00:43:26.856 --rc genhtml_legend=1 00:43:26.856 --rc geninfo_all_blocks=1 00:43:26.856 --rc geninfo_unexecuted_blocks=1 00:43:26.856 00:43:26.856 ' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:26.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:26.856 --rc genhtml_branch_coverage=1 00:43:26.856 --rc genhtml_function_coverage=1 00:43:26.856 --rc genhtml_legend=1 00:43:26.856 --rc geninfo_all_blocks=1 00:43:26.856 --rc geninfo_unexecuted_blocks=1 00:43:26.856 00:43:26.856 ' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:26.856 06:49:18 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:26.857 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- 
# xtrace_disable 00:43:26.857 06:49:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:33.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma 
]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:33.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:33.428 Found net devices under 0000:af:00.0: cvl_0_0 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- 
nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:33.428 Found net devices under 0000:af:00.1: cvl_0_1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:33.428 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:33.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:43:33.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.470 ms 00:43:33.429 00:43:33.429 --- 10.0.0.2 ping statistics --- 00:43:33.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.429 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:33.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:43:33.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:43:33.429 00:43:33.429 --- 10.0.0.1 ping statistics --- 00:43:33.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:33.429 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:33.429 06:49:24 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:35.963 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:35.963 0000:80:04.1 (8086 2021): 
ioatdma -> vfio-pci 00:43:35.963 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:36.530 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=1317196 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 1317196 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1317196 ']' 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:43:36.530 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:36.530 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:36.530 [2024-12-13 06:49:28.178546] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:43:36.530 [2024-12-13 06:49:28.178591] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:36.788 [2024-12-13 06:49:28.258689] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:36.788 [2024-12-13 06:49:28.282498] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:36.788 [2024-12-13 06:49:28.282536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:36.788 [2024-12-13 06:49:28.282546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:36.788 [2024-12-13 06:49:28.282551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:36.788 [2024-12-13 06:49:28.282556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:36.788 [2024-12-13 06:49:28.283831] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:36.788 [2024-12-13 06:49:28.283941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:36.788 [2024-12-13 06:49:28.284046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:36.788 [2024-12-13 06:49:28.284047] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:36.788 06:49:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:37.046 ************************************ 00:43:37.046 START TEST spdk_target_abort 00:43:37.046 ************************************ 00:43:37.046 06:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:37.046 06:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:37.046 06:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:37.046 06:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:37.046 06:49:28 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:40.330 spdk_targetn1 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:40.330 [2024-12-13 06:49:31.289029] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:40.330 [2024-12-13 06:49:31.345358] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:40.330 06:49:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:43.608 Initializing NVMe Controllers 00:43:43.608 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:43.608 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:43.608 Initialization complete. Launching workers. 
00:43:43.608 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15518, failed: 0 00:43:43.608 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1406, failed to submit 14112 00:43:43.608 success 746, unsuccessful 660, failed 0 00:43:43.608 06:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:43.608 06:49:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:46.889 Initializing NVMe Controllers 00:43:46.889 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:46.889 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:46.889 Initialization complete. Launching workers. 00:43:46.889 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8548, failed: 0 00:43:46.889 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1242, failed to submit 7306 00:43:46.889 success 349, unsuccessful 893, failed 0 00:43:46.889 06:49:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:46.889 06:49:37 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:50.168 Initializing NVMe Controllers 00:43:50.168 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:43:50.168 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:50.168 Initialization complete. Launching workers. 
00:43:50.168 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38776, failed: 0 00:43:50.168 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2848, failed to submit 35928 00:43:50.168 success 569, unsuccessful 2279, failed 0 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:50.168 06:49:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1317196 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1317196 ']' 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1317196 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1317196 00:43:51.100 06:49:42 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1317196' 00:43:51.100 killing process with pid 1317196 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1317196 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1317196 00:43:51.100 00:43:51.100 real 0m14.199s 00:43:51.100 user 0m54.387s 00:43:51.100 sys 0m2.287s 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:51.100 06:49:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 ************************************ 00:43:51.100 END TEST spdk_target_abort 00:43:51.100 ************************************ 00:43:51.100 06:49:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:43:51.100 06:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:51.100 06:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:51.100 06:49:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:51.100 ************************************ 00:43:51.100 START TEST kernel_target_abort 00:43:51.100 ************************************ 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:43:51.101 06:49:42 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@667 -- # local block nvme 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:43:51.101 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:43:51.359 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:43:51.360 06:49:42 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:53.895 Waiting for block devices as requested 00:43:53.895 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:54.153 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:54.153 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:54.153 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:54.413 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:54.413 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:54.413 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:54.413 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:54.671 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:54.671 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:54.671 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:54.930 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:54.930 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:54.930 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:54.930 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:55.188 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:55.188 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:43:55.188 06:49:46 
nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:43:55.188 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:43:55.447 No valid GPT data, bailing 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- 
nvmf/common.sh@695 -- # echo 1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:43:55.447 00:43:55.447 Discovery Log Number of Records 2, Generation counter 2 00:43:55.447 =====Discovery Log Entry 0====== 00:43:55.447 trtype: tcp 00:43:55.447 adrfam: ipv4 00:43:55.447 subtype: current discovery subsystem 00:43:55.447 treq: not specified, sq flow control disable supported 00:43:55.447 portid: 1 00:43:55.447 trsvcid: 4420 00:43:55.447 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:43:55.447 traddr: 10.0.0.1 00:43:55.447 eflags: none 00:43:55.447 sectype: none 00:43:55.447 =====Discovery Log Entry 1====== 00:43:55.447 trtype: tcp 00:43:55.447 adrfam: ipv4 00:43:55.447 subtype: nvme subsystem 00:43:55.447 treq: not specified, sq flow control disable supported 00:43:55.447 portid: 1 00:43:55.447 trsvcid: 4420 00:43:55.447 subnqn: nqn.2016-06.io.spdk:testnqn 00:43:55.447 traddr: 10.0.0.1 00:43:55.447 eflags: none 00:43:55.447 sectype: none 00:43:55.447 06:49:46 
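The configfs writes traced above come from configure_kernel_target in nvmf/common.sh. A condensed sketch of those steps follows; note the attribute file names (attr_model, addr_traddr, etc.) are assumed from the standard kernel nvmet configfs layout — xtrace does not show redirection targets — and the cfgroot parameter is an illustrative addition (on a real system it is always /sys/kernel/config/nvmet and requires root with nvmet/nvmet-tcp loaded):

```shell
#!/usr/bin/env bash
# Sketch of the kernel NVMe-oF target setup performed above. cfgroot is
# parameterized only so the steps can be exercised outside configfs.
configure_kernel_target() {
    local nqn=$1 ip=$2 cfgroot=${3:-/sys/kernel/config/nvmet}
    local subsys=$cfgroot/subsystems/$nqn
    local ns=$subsys/namespaces/1
    local port=$cfgroot/ports/1

    mkdir -p "$subsys" "$ns" "$port" "$port/subsystems"

    echo "SPDK-$nqn"  > "$subsys/attr_model"          # model string seen by hosts
    echo 1            > "$subsys/attr_allow_any_host" # no host allow-list
    echo /dev/nvme0n1 > "$ns/device_path"             # backing block device
    echo 1            > "$ns/enable"

    echo "$ip"        > "$port/addr_traddr"           # listen address
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"

    # Expose the subsystem on the port (the ln -s in the trace above).
    ln -s "$subsys" "$port/subsystems/$nqn"
}
```

After this, the `nvme discover` in the trace sees two log entries: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420.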
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for 
r in trtype adrfam traddr trsvcid subnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:55.447 06:49:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:43:58.729 Initializing NVMe Controllers 00:43:58.729 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:43:58.729 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:43:58.729 Initialization complete. Launching workers. 
00:43:58.729 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96169, failed: 0 00:43:58.729 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 96169, failed to submit 0 00:43:58.729 success 0, unsuccessful 96169, failed 0 00:43:58.729 06:49:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:43:58.729 06:49:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:02.014 Initializing NVMe Controllers 00:44:02.014 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:02.014 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:02.014 Initialization complete. Launching workers. 00:44:02.014 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 149052, failed: 0 00:44:02.014 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37410, failed to submit 111642 00:44:02.014 success 0, unsuccessful 37410, failed 0 00:44:02.014 06:49:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:02.014 06:49:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:05.294 Initializing NVMe Controllers 00:44:05.294 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:05.294 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:05.294 Initialization complete. Launching workers. 
00:44:05.294 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 143288, failed: 0 00:44:05.294 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35878, failed to submit 107410 00:44:05.294 success 0, unsuccessful 35878, failed 0 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:05.294 06:49:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:07.830 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:07.830 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:08.520 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:08.520 00:44:08.520 real 0m17.431s 00:44:08.520 user 0m9.176s 00:44:08.520 sys 0m4.975s 00:44:08.520 06:50:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:08.520 06:50:00 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:08.520 ************************************ 00:44:08.520 END TEST kernel_target_abort 00:44:08.520 ************************************ 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:08.813 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:08.814 rmmod nvme_tcp 00:44:08.814 rmmod nvme_fabrics 00:44:08.814 rmmod nvme_keyring 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
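The three abort runs above (queue depths 4, 24, 64) were driven by the rabort helper in target/abort_qd_sizes.sh. A condensed, hypothetical reduction of its loop is sketched below; the echo of the assembled transport ID and the ABORT_BIN guard are illustrative additions (the CI job invokes spdk/build/examples/abort directly):

```shell
# Sketch of the rabort queue-depth sweep: assemble the transport ID string
# field by field (the xtrace above shows the same incremental strings),
# then run the abort example once per queue depth.
rabort() {
    local trtype=$1 adrfam=$2 traddr=$3 trsvcid=$4 subnqn=$5
    local qds=(4 24 64) target="" r qd
    for r in trtype adrfam traddr trsvcid subnqn; do
        target="${target:+$target }$r:${!r}"   # indirect expansion of each field
    done
    echo "$target"
    for qd in "${qds[@]}"; do
        # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O size.
        [ -x "${ABORT_BIN:-}" ] && "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done || true
}
```

The results above show the expected pattern: at qd=4 every abort could be submitted (96169 submitted, 0 failed to submit), while at qd=24 and qd=64 most aborts fail to submit because the controller's abort command limit is exhausted.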
-- # modprobe -v -r nvme-fabrics 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 1317196 ']' 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 1317196 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1317196 ']' 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1317196 00:44:08.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1317196) - No such process 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1317196 is not found' 00:44:08.814 Process with pid 1317196 is not found 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:08.814 06:50:00 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:11.348 Waiting for block devices as requested 00:44:11.348 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:11.607 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:11.607 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:11.607 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:11.866 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:11.866 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:11.866 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:12.126 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:12.126 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:12.126 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:12.385 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:12.385 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:12.385 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:12.385 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:12.645 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:12.645 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:12.645 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:12.904 06:50:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:14.808 06:50:06 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:14.808 00:44:14.808 real 0m48.137s 00:44:14.808 user 1m7.855s 00:44:14.808 sys 0m15.929s 00:44:14.808 06:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:14.808 06:50:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:14.808 ************************************ 00:44:14.808 END TEST nvmf_abort_qd_sizes 00:44:14.808 ************************************ 00:44:14.808 06:50:06 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:14.808 06:50:06 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:14.808 06:50:06 -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:44:14.808 06:50:06 -- common/autotest_common.sh@10 -- # set +x 00:44:15.068 ************************************ 00:44:15.068 START TEST keyring_file 00:44:15.068 ************************************ 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:15.068 * Looking for test storage... 00:44:15.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:15.068 06:50:06 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.068 --rc genhtml_branch_coverage=1 00:44:15.068 --rc genhtml_function_coverage=1 00:44:15.068 --rc genhtml_legend=1 00:44:15.068 --rc geninfo_all_blocks=1 00:44:15.068 --rc geninfo_unexecuted_blocks=1 00:44:15.068 00:44:15.068 ' 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.068 --rc genhtml_branch_coverage=1 00:44:15.068 --rc genhtml_function_coverage=1 00:44:15.068 --rc genhtml_legend=1 00:44:15.068 --rc geninfo_all_blocks=1 00:44:15.068 --rc 
geninfo_unexecuted_blocks=1 00:44:15.068 00:44:15.068 ' 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.068 --rc genhtml_branch_coverage=1 00:44:15.068 --rc genhtml_function_coverage=1 00:44:15.068 --rc genhtml_legend=1 00:44:15.068 --rc geninfo_all_blocks=1 00:44:15.068 --rc geninfo_unexecuted_blocks=1 00:44:15.068 00:44:15.068 ' 00:44:15.068 06:50:06 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:15.068 --rc genhtml_branch_coverage=1 00:44:15.068 --rc genhtml_function_coverage=1 00:44:15.068 --rc genhtml_legend=1 00:44:15.068 --rc geninfo_all_blocks=1 00:44:15.068 --rc geninfo_unexecuted_blocks=1 00:44:15.068 00:44:15.068 ' 00:44:15.068 06:50:06 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:15.068 06:50:06 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:15.068 06:50:06 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:15.068 06:50:06 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:15.068 06:50:06 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.068 06:50:06 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.068 06:50:06 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.068 06:50:06 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:15.068 06:50:06 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:15.068 06:50:06 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:15.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:15.069 06:50:06 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.O6uLCjWVSr 00:44:15.069 06:50:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:15.069 06:50:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:15.327 06:50:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.O6uLCjWVSr 00:44:15.327 06:50:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.O6uLCjWVSr 00:44:15.327 06:50:06 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.O6uLCjWVSr 00:44:15.327 06:50:06 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Wn6N3fBkfd 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:15.328 06:50:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Wn6N3fBkfd 00:44:15.328 06:50:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Wn6N3fBkfd 00:44:15.328 06:50:06 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.Wn6N3fBkfd 
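The prep_key calls above (keyring/common.sh) turn a hex key into an NVMe TLS interchange PSK via format_interchange_psk, which shells out to an inline python snippet, then store it in a mode-0600 temp file. The sketch below is a hypothetical re-implementation: the exact framing — base64 of the raw key bytes plus their little-endian CRC-32, wrapped as "NVMeTLSkey-1:<digest>:...:" per the NVMe TLS PSK interchange format — is my assumption, not visible in the trace:

```shell
# Assumed shape of format_interchange_psk: NVMeTLSkey-1:<digest-id>:<b64>:
format_interchange_psk() {
    local key_hex=$1 digest=$2
    python3 - "$key_hex" "$digest" <<'EOF'
import base64, binascii, sys
key = bytes.fromhex(sys.argv[1])
digest = int(sys.argv[2])
crc = binascii.crc32(key).to_bytes(4, "little")  # integrity check appended to key
print(f"NVMeTLSkey-1:{digest:02x}:{base64.b64encode(key + crc).decode()}:")
EOF
}

prep_key() {
    # Mirror keyring/common.sh: write the formatted key to a 0600 temp file
    # (TLS keys must not be world-readable) and report the path.
    local path
    path=$(mktemp)
    format_interchange_psk "$1" "$2" > "$path"
    chmod 0600 "$path"
    echo "$path"
}
```

This is why the trace shows mktemp, a `python -` invocation, and `chmod 0600 /tmp/tmp.O6uLCjWVSr` for each of key0 and key1.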
00:44:15.328 06:50:06 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:15.328 06:50:06 keyring_file -- keyring/file.sh@30 -- # tgtpid=1325794 00:44:15.328 06:50:06 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1325794 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325794 ']' 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:15.328 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:15.328 06:50:06 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:15.328 [2024-12-13 06:50:06.838522] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:15.328 [2024-12-13 06:50:06.838569] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325794 ] 00:44:15.328 [2024-12-13 06:50:06.914747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:15.328 [2024-12-13 06:50:06.937414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:15.587 06:50:07 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:15.587 [2024-12-13 06:50:07.135852] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:15.587 null0 00:44:15.587 [2024-12-13 06:50:07.167902] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:15.587 [2024-12-13 06:50:07.168185] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:15.587 06:50:07 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:15.587 [2024-12-13 06:50:07.195969] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:15.587 request: 00:44:15.587 { 00:44:15.587 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:15.587 "secure_channel": false, 00:44:15.587 "listen_address": { 00:44:15.587 "trtype": "tcp", 00:44:15.587 "traddr": "127.0.0.1", 00:44:15.587 "trsvcid": "4420" 00:44:15.587 }, 00:44:15.587 "method": "nvmf_subsystem_add_listener", 00:44:15.587 "req_id": 1 00:44:15.587 } 00:44:15.587 Got JSON-RPC error response 00:44:15.587 response: 00:44:15.587 { 00:44:15.587 "code": -32602, 00:44:15.587 "message": "Invalid parameters" 00:44:15.587 } 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:15.587 06:50:07 keyring_file -- keyring/file.sh@47 -- # bperfpid=1325862 00:44:15.587 06:50:07 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1325862 /var/tmp/bperf.sock 00:44:15.587 06:50:07 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:15.587 06:50:07 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1325862 ']' 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:15.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:15.587 06:50:07 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:15.846 [2024-12-13 06:50:07.247477] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 00:44:15.846 [2024-12-13 06:50:07.247516] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1325862 ] 00:44:15.846 [2024-12-13 06:50:07.322171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:15.846 [2024-12-13 06:50:07.344508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:15.846 06:50:07 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:15.846 06:50:07 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:15.846 06:50:07 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:15.846 06:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:16.104 06:50:07 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wn6N3fBkfd 00:44:16.104 06:50:07 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wn6N3fBkfd 00:44:16.363 06:50:07 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:16.363 06:50:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:16.363 06:50:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.363 06:50:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:16.363 06:50:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:16.363 06:50:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.O6uLCjWVSr == \/\t\m\p\/\t\m\p\.\O\6\u\L\C\j\W\V\S\r ]] 00:44:16.363 06:50:07 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:16.363 06:50:07 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:16.363 06:50:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.363 06:50:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:16.363 06:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:16.621 06:50:08 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.Wn6N3fBkfd == \/\t\m\p\/\t\m\p\.\W\n\6\N\3\f\B\k\f\d ]] 00:44:16.621 06:50:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:16.621 06:50:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:16.621 06:50:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:16.621 06:50:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.621 06:50:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:16.621 06:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:16.880 06:50:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:16.880 06:50:08 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:16.880 06:50:08 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:16.880 06:50:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:16.880 06:50:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:16.880 06:50:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:16.880 06:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:17.139 06:50:08 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:17.139 06:50:08 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:17.139 06:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:17.139 [2024-12-13 06:50:08.785993] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:17.397 nvme0n1 00:44:17.397 06:50:08 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:17.397 06:50:08 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:17.397 06:50:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:17.397 06:50:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:17.397 06:50:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:17.397 06:50:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:17.656 06:50:09 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:17.656 06:50:09 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:17.656 06:50:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:17.656 06:50:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:17.656 06:50:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:17.656 06:50:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:17.656 06:50:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:17.656 06:50:09 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:17.656 06:50:09 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:17.914 Running I/O for 1 seconds... 00:44:18.851 19229.00 IOPS, 75.11 MiB/s 00:44:18.851 Latency(us) 00:44:18.851 [2024-12-13T05:50:10.505Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:18.851 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:18.851 nvme0n1 : 1.00 19275.58 75.30 0.00 0.00 6628.49 2668.25 9986.44 00:44:18.851 [2024-12-13T05:50:10.505Z] =================================================================================================================== 00:44:18.851 [2024-12-13T05:50:10.505Z] Total : 19275.58 75.30 0.00 0.00 6628.49 2668.25 9986.44 00:44:18.851 { 00:44:18.851 "results": [ 00:44:18.851 { 00:44:18.851 "job": "nvme0n1", 00:44:18.851 "core_mask": "0x2", 00:44:18.851 "workload": "randrw", 00:44:18.851 "percentage": 50, 00:44:18.851 "status": "finished", 00:44:18.851 "queue_depth": 128, 00:44:18.851 "io_size": 4096, 00:44:18.851 "runtime": 1.004276, 00:44:18.851 "iops": 19275.57763005389, 00:44:18.851 "mibps": 75.295225117398, 
00:44:18.851 "io_failed": 0, 00:44:18.851 "io_timeout": 0, 00:44:18.851 "avg_latency_us": 6628.493932962378, 00:44:18.851 "min_latency_us": 2668.2514285714287, 00:44:18.851 "max_latency_us": 9986.438095238096 00:44:18.851 } 00:44:18.851 ], 00:44:18.851 "core_count": 1 00:44:18.851 } 00:44:18.851 06:50:10 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:18.851 06:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:19.110 06:50:10 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:19.110 06:50:10 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:19.110 06:50:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.110 06:50:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:19.110 06:50:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:19.110 06:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.369 06:50:10 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:19.369 06:50:10 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:19.369 06:50:10 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:19.369 06:50:10 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.369 06:50:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:19.369 06:50:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:19.369 06:50:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.369 06:50:11 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:19.369 06:50:11 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:19.369 06:50:11 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:19.369 06:50:11 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:19.369 06:50:11 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:19.627 [2024-12-13 06:50:11.210225] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:19.627 [2024-12-13 06:50:11.210978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc76a0 (107): Transport endpoint is not connected 00:44:19.627 [2024-12-13 06:50:11.211975] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc76a0 (9): Bad file descriptor 00:44:19.627 [2024-12-13 06:50:11.212976] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:19.627 [2024-12-13 06:50:11.212985] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:19.627 [2024-12-13 06:50:11.212992] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:19.627 [2024-12-13 06:50:11.213001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:19.627 request: 00:44:19.627 { 00:44:19.627 "name": "nvme0", 00:44:19.627 "trtype": "tcp", 00:44:19.627 "traddr": "127.0.0.1", 00:44:19.627 "adrfam": "ipv4", 00:44:19.627 "trsvcid": "4420", 00:44:19.627 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:19.627 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:19.627 "prchk_reftag": false, 00:44:19.627 "prchk_guard": false, 00:44:19.627 "hdgst": false, 00:44:19.627 "ddgst": false, 00:44:19.627 "psk": "key1", 00:44:19.627 "allow_unrecognized_csi": false, 00:44:19.627 "method": "bdev_nvme_attach_controller", 00:44:19.627 "req_id": 1 00:44:19.627 } 00:44:19.627 Got JSON-RPC error response 00:44:19.627 response: 00:44:19.627 { 00:44:19.627 "code": -5, 00:44:19.627 "message": "Input/output error" 00:44:19.627 } 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:19.627 06:50:11 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:19.627 06:50:11 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:19.627 06:50:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:19.885 06:50:11 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:19.885 06:50:11 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:19.885 06:50:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:19.885 06:50:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:19.885 06:50:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:19.885 06:50:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:19.885 06:50:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:20.143 06:50:11 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:20.143 06:50:11 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:20.143 06:50:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:20.400 06:50:11 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:20.401 06:50:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:20.401 06:50:11 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:20.401 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:20.401 06:50:12 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:20.658 06:50:12 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:20.658 06:50:12 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.O6uLCjWVSr 00:44:20.658 06:50:12 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:20.658 06:50:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.658 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.916 [2024-12-13 06:50:12.366147] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.O6uLCjWVSr': 0100660 00:44:20.916 [2024-12-13 06:50:12.366171] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:20.916 request: 00:44:20.916 { 00:44:20.916 "name": "key0", 00:44:20.916 "path": "/tmp/tmp.O6uLCjWVSr", 00:44:20.916 "method": "keyring_file_add_key", 00:44:20.916 "req_id": 1 00:44:20.916 } 00:44:20.916 Got JSON-RPC error response 00:44:20.916 response: 00:44:20.916 { 00:44:20.916 "code": -1, 00:44:20.916 "message": "Operation not permitted" 00:44:20.916 } 00:44:20.916 06:50:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:20.916 06:50:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:20.916 06:50:12 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:20.916 06:50:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:20.916 06:50:12 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.O6uLCjWVSr 00:44:20.916 06:50:12 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.916 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.O6uLCjWVSr 00:44:20.916 06:50:12 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.O6uLCjWVSr 00:44:21.174 06:50:12 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:21.174 06:50:12 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:21.174 06:50:12 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:21.174 06:50:12 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:21.174 06:50:12 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:21.174 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:21.432 [2024-12-13 06:50:12.939661] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.O6uLCjWVSr': No such file or directory 00:44:21.432 [2024-12-13 06:50:12.939677] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:21.432 [2024-12-13 06:50:12.939691] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:21.432 [2024-12-13 06:50:12.939699] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:21.432 [2024-12-13 06:50:12.939706] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:21.432 [2024-12-13 06:50:12.939711] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:21.432 request: 00:44:21.432 { 00:44:21.432 "name": "nvme0", 00:44:21.432 "trtype": "tcp", 00:44:21.432 "traddr": "127.0.0.1", 00:44:21.432 "adrfam": "ipv4", 00:44:21.432 "trsvcid": "4420", 00:44:21.432 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:21.432 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:21.432 "prchk_reftag": false, 00:44:21.432 "prchk_guard": false, 00:44:21.432 "hdgst": false, 00:44:21.432 "ddgst": false, 00:44:21.432 "psk": "key0", 00:44:21.432 "allow_unrecognized_csi": false, 00:44:21.432 "method": "bdev_nvme_attach_controller", 00:44:21.432 "req_id": 1 00:44:21.432 } 00:44:21.432 Got JSON-RPC error response 00:44:21.432 response: 00:44:21.432 { 00:44:21.432 "code": -19, 00:44:21.432 "message": "No such device" 00:44:21.432 } 00:44:21.432 06:50:12 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:21.432 06:50:12 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:21.432 06:50:12 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:21.432 06:50:12 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:21.432 06:50:12 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:21.432 06:50:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:21.690 06:50:13 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.GrXP0eL1Ah 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:21.690 06:50:13 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:21.690 06:50:13 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:21.690 06:50:13 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:21.690 06:50:13 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:21.690 06:50:13 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:21.690 06:50:13 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.GrXP0eL1Ah 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.GrXP0eL1Ah 00:44:21.690 06:50:13 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.GrXP0eL1Ah 00:44:21.690 06:50:13 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GrXP0eL1Ah 00:44:21.690 06:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GrXP0eL1Ah 00:44:21.948 06:50:13 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:21.948 06:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:22.205 nvme0n1 00:44:22.205 06:50:13 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:22.205 06:50:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:22.205 06:50:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:22.205 06:50:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:22.205 06:50:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:22.205 06:50:13 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.463 06:50:13 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:22.463 06:50:13 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:22.463 06:50:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:22.463 06:50:14 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:22.463 06:50:14 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:22.463 06:50:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:22.463 06:50:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:22.463 06:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.721 06:50:14 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:22.721 06:50:14 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:22.721 06:50:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:22.721 06:50:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:22.721 06:50:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:22.721 06:50:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:22.721 06:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:22.979 06:50:14 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:22.979 06:50:14 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:22.979 06:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:44:23.302 06:50:14 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:23.302 06:50:14 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:23.302 06:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:23.302 06:50:14 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:23.302 06:50:14 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.GrXP0eL1Ah 00:44:23.302 06:50:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.GrXP0eL1Ah 00:44:23.643 06:50:15 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.Wn6N3fBkfd 00:44:23.643 06:50:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.Wn6N3fBkfd 00:44:23.643 06:50:15 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:23.643 06:50:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:23.901 nvme0n1 00:44:23.901 06:50:15 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:23.901 06:50:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:24.159 06:50:15 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:24.159 "subsystems": [ 00:44:24.159 { 00:44:24.159 "subsystem": 
"keyring", 00:44:24.159 "config": [ 00:44:24.159 { 00:44:24.159 "method": "keyring_file_add_key", 00:44:24.159 "params": { 00:44:24.159 "name": "key0", 00:44:24.159 "path": "/tmp/tmp.GrXP0eL1Ah" 00:44:24.159 } 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "method": "keyring_file_add_key", 00:44:24.159 "params": { 00:44:24.159 "name": "key1", 00:44:24.159 "path": "/tmp/tmp.Wn6N3fBkfd" 00:44:24.159 } 00:44:24.159 } 00:44:24.159 ] 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "subsystem": "iobuf", 00:44:24.159 "config": [ 00:44:24.159 { 00:44:24.159 "method": "iobuf_set_options", 00:44:24.159 "params": { 00:44:24.159 "small_pool_count": 8192, 00:44:24.159 "large_pool_count": 1024, 00:44:24.159 "small_bufsize": 8192, 00:44:24.159 "large_bufsize": 135168, 00:44:24.159 "enable_numa": false 00:44:24.159 } 00:44:24.159 } 00:44:24.159 ] 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "subsystem": "sock", 00:44:24.159 "config": [ 00:44:24.159 { 00:44:24.159 "method": "sock_set_default_impl", 00:44:24.159 "params": { 00:44:24.159 "impl_name": "posix" 00:44:24.159 } 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "method": "sock_impl_set_options", 00:44:24.159 "params": { 00:44:24.159 "impl_name": "ssl", 00:44:24.159 "recv_buf_size": 4096, 00:44:24.159 "send_buf_size": 4096, 00:44:24.159 "enable_recv_pipe": true, 00:44:24.159 "enable_quickack": false, 00:44:24.159 "enable_placement_id": 0, 00:44:24.159 "enable_zerocopy_send_server": true, 00:44:24.159 "enable_zerocopy_send_client": false, 00:44:24.159 "zerocopy_threshold": 0, 00:44:24.159 "tls_version": 0, 00:44:24.159 "enable_ktls": false 00:44:24.159 } 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "method": "sock_impl_set_options", 00:44:24.159 "params": { 00:44:24.159 "impl_name": "posix", 00:44:24.159 "recv_buf_size": 2097152, 00:44:24.159 "send_buf_size": 2097152, 00:44:24.159 "enable_recv_pipe": true, 00:44:24.159 "enable_quickack": false, 00:44:24.159 "enable_placement_id": 0, 00:44:24.159 "enable_zerocopy_send_server": true, 
00:44:24.159 "enable_zerocopy_send_client": false, 00:44:24.159 "zerocopy_threshold": 0, 00:44:24.159 "tls_version": 0, 00:44:24.159 "enable_ktls": false 00:44:24.159 } 00:44:24.159 } 00:44:24.159 ] 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "subsystem": "vmd", 00:44:24.159 "config": [] 00:44:24.159 }, 00:44:24.159 { 00:44:24.159 "subsystem": "accel", 00:44:24.159 "config": [ 00:44:24.159 { 00:44:24.159 "method": "accel_set_options", 00:44:24.159 "params": { 00:44:24.159 "small_cache_size": 128, 00:44:24.159 "large_cache_size": 16, 00:44:24.159 "task_count": 2048, 00:44:24.159 "sequence_count": 2048, 00:44:24.159 "buf_count": 2048 00:44:24.159 } 00:44:24.159 } 00:44:24.159 ] 00:44:24.159 }, 00:44:24.160 { 00:44:24.160 "subsystem": "bdev", 00:44:24.160 "config": [ 00:44:24.160 { 00:44:24.160 "method": "bdev_set_options", 00:44:24.160 "params": { 00:44:24.160 "bdev_io_pool_size": 65535, 00:44:24.160 "bdev_io_cache_size": 256, 00:44:24.160 "bdev_auto_examine": true, 00:44:24.160 "iobuf_small_cache_size": 128, 00:44:24.160 "iobuf_large_cache_size": 16 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_raid_set_options", 00:44:24.160 "params": { 00:44:24.160 "process_window_size_kb": 1024, 00:44:24.160 "process_max_bandwidth_mb_sec": 0 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_iscsi_set_options", 00:44:24.160 "params": { 00:44:24.160 "timeout_sec": 30 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_nvme_set_options", 00:44:24.160 "params": { 00:44:24.160 "action_on_timeout": "none", 00:44:24.160 "timeout_us": 0, 00:44:24.160 "timeout_admin_us": 0, 00:44:24.160 "keep_alive_timeout_ms": 10000, 00:44:24.160 "arbitration_burst": 0, 00:44:24.160 "low_priority_weight": 0, 00:44:24.160 "medium_priority_weight": 0, 00:44:24.160 "high_priority_weight": 0, 00:44:24.160 "nvme_adminq_poll_period_us": 10000, 00:44:24.160 "nvme_ioq_poll_period_us": 0, 00:44:24.160 "io_queue_requests": 512, 
00:44:24.160 "delay_cmd_submit": true, 00:44:24.160 "transport_retry_count": 4, 00:44:24.160 "bdev_retry_count": 3, 00:44:24.160 "transport_ack_timeout": 0, 00:44:24.160 "ctrlr_loss_timeout_sec": 0, 00:44:24.160 "reconnect_delay_sec": 0, 00:44:24.160 "fast_io_fail_timeout_sec": 0, 00:44:24.160 "disable_auto_failback": false, 00:44:24.160 "generate_uuids": false, 00:44:24.160 "transport_tos": 0, 00:44:24.160 "nvme_error_stat": false, 00:44:24.160 "rdma_srq_size": 0, 00:44:24.160 "io_path_stat": false, 00:44:24.160 "allow_accel_sequence": false, 00:44:24.160 "rdma_max_cq_size": 0, 00:44:24.160 "rdma_cm_event_timeout_ms": 0, 00:44:24.160 "dhchap_digests": [ 00:44:24.160 "sha256", 00:44:24.160 "sha384", 00:44:24.160 "sha512" 00:44:24.160 ], 00:44:24.160 "dhchap_dhgroups": [ 00:44:24.160 "null", 00:44:24.160 "ffdhe2048", 00:44:24.160 "ffdhe3072", 00:44:24.160 "ffdhe4096", 00:44:24.160 "ffdhe6144", 00:44:24.160 "ffdhe8192" 00:44:24.160 ], 00:44:24.160 "rdma_umr_per_io": false 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_nvme_attach_controller", 00:44:24.160 "params": { 00:44:24.160 "name": "nvme0", 00:44:24.160 "trtype": "TCP", 00:44:24.160 "adrfam": "IPv4", 00:44:24.160 "traddr": "127.0.0.1", 00:44:24.160 "trsvcid": "4420", 00:44:24.160 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:24.160 "prchk_reftag": false, 00:44:24.160 "prchk_guard": false, 00:44:24.160 "ctrlr_loss_timeout_sec": 0, 00:44:24.160 "reconnect_delay_sec": 0, 00:44:24.160 "fast_io_fail_timeout_sec": 0, 00:44:24.160 "psk": "key0", 00:44:24.160 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:24.160 "hdgst": false, 00:44:24.160 "ddgst": false, 00:44:24.160 "multipath": "multipath" 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_nvme_set_hotplug", 00:44:24.160 "params": { 00:44:24.160 "period_us": 100000, 00:44:24.160 "enable": false 00:44:24.160 } 00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "method": "bdev_wait_for_examine" 00:44:24.160 } 00:44:24.160 ] 
00:44:24.160 }, 00:44:24.160 { 00:44:24.160 "subsystem": "nbd", 00:44:24.160 "config": [] 00:44:24.160 } 00:44:24.160 ] 00:44:24.160 }' 00:44:24.160 06:50:15 keyring_file -- keyring/file.sh@115 -- # killprocess 1325862 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325862 ']' 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325862 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325862 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325862' 00:44:24.160 killing process with pid 1325862 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@973 -- # kill 1325862 00:44:24.160 Received shutdown signal, test time was about 1.000000 seconds 00:44:24.160 00:44:24.160 Latency(us) 00:44:24.160 [2024-12-13T05:50:15.814Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:24.160 [2024-12-13T05:50:15.814Z] =================================================================================================================== 00:44:24.160 [2024-12-13T05:50:15.814Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:24.160 06:50:15 keyring_file -- common/autotest_common.sh@978 -- # wait 1325862 00:44:24.418 06:50:15 keyring_file -- keyring/file.sh@118 -- # bperfpid=1327339 00:44:24.418 06:50:15 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1327339 /var/tmp/bperf.sock 00:44:24.418 06:50:15 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1327339 ']' 00:44:24.418 06:50:15 keyring_file -- 
keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:24.418 06:50:15 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:24.418 06:50:15 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:24.418 06:50:15 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:24.418 06:50:15 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:44:24.418 "subsystems": [ 00:44:24.418 { 00:44:24.418 "subsystem": "keyring", 00:44:24.418 "config": [ 00:44:24.418 { 00:44:24.418 "method": "keyring_file_add_key", 00:44:24.418 "params": { 00:44:24.418 "name": "key0", 00:44:24.418 "path": "/tmp/tmp.GrXP0eL1Ah" 00:44:24.418 } 00:44:24.418 }, 00:44:24.418 { 00:44:24.418 "method": "keyring_file_add_key", 00:44:24.418 "params": { 00:44:24.418 "name": "key1", 00:44:24.418 "path": "/tmp/tmp.Wn6N3fBkfd" 00:44:24.418 } 00:44:24.418 } 00:44:24.418 ] 00:44:24.418 }, 00:44:24.418 { 00:44:24.418 "subsystem": "iobuf", 00:44:24.418 "config": [ 00:44:24.418 { 00:44:24.418 "method": "iobuf_set_options", 00:44:24.418 "params": { 00:44:24.418 "small_pool_count": 8192, 00:44:24.418 "large_pool_count": 1024, 00:44:24.418 "small_bufsize": 8192, 00:44:24.418 "large_bufsize": 135168, 00:44:24.418 "enable_numa": false 00:44:24.418 } 00:44:24.418 } 00:44:24.418 ] 00:44:24.418 }, 00:44:24.418 { 00:44:24.418 "subsystem": "sock", 00:44:24.418 "config": [ 00:44:24.418 { 00:44:24.418 "method": "sock_set_default_impl", 00:44:24.418 "params": { 00:44:24.418 "impl_name": "posix" 00:44:24.418 } 00:44:24.418 }, 00:44:24.418 { 00:44:24.418 "method": "sock_impl_set_options", 00:44:24.418 "params": { 00:44:24.418 "impl_name": "ssl", 00:44:24.418 "recv_buf_size": 4096, 00:44:24.419 "send_buf_size": 4096, 00:44:24.419 "enable_recv_pipe": 
true, 00:44:24.419 "enable_quickack": false, 00:44:24.419 "enable_placement_id": 0, 00:44:24.419 "enable_zerocopy_send_server": true, 00:44:24.419 "enable_zerocopy_send_client": false, 00:44:24.419 "zerocopy_threshold": 0, 00:44:24.419 "tls_version": 0, 00:44:24.419 "enable_ktls": false 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "sock_impl_set_options", 00:44:24.419 "params": { 00:44:24.419 "impl_name": "posix", 00:44:24.419 "recv_buf_size": 2097152, 00:44:24.419 "send_buf_size": 2097152, 00:44:24.419 "enable_recv_pipe": true, 00:44:24.419 "enable_quickack": false, 00:44:24.419 "enable_placement_id": 0, 00:44:24.419 "enable_zerocopy_send_server": true, 00:44:24.419 "enable_zerocopy_send_client": false, 00:44:24.419 "zerocopy_threshold": 0, 00:44:24.419 "tls_version": 0, 00:44:24.419 "enable_ktls": false 00:44:24.419 } 00:44:24.419 } 00:44:24.419 ] 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "subsystem": "vmd", 00:44:24.419 "config": [] 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "subsystem": "accel", 00:44:24.419 "config": [ 00:44:24.419 { 00:44:24.419 "method": "accel_set_options", 00:44:24.419 "params": { 00:44:24.419 "small_cache_size": 128, 00:44:24.419 "large_cache_size": 16, 00:44:24.419 "task_count": 2048, 00:44:24.419 "sequence_count": 2048, 00:44:24.419 "buf_count": 2048 00:44:24.419 } 00:44:24.419 } 00:44:24.419 ] 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "subsystem": "bdev", 00:44:24.419 "config": [ 00:44:24.419 { 00:44:24.419 "method": "bdev_set_options", 00:44:24.419 "params": { 00:44:24.419 "bdev_io_pool_size": 65535, 00:44:24.419 "bdev_io_cache_size": 256, 00:44:24.419 "bdev_auto_examine": true, 00:44:24.419 "iobuf_small_cache_size": 128, 00:44:24.419 "iobuf_large_cache_size": 16 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "bdev_raid_set_options", 00:44:24.419 "params": { 00:44:24.419 "process_window_size_kb": 1024, 00:44:24.419 "process_max_bandwidth_mb_sec": 0 00:44:24.419 } 00:44:24.419 }, 
00:44:24.419 { 00:44:24.419 "method": "bdev_iscsi_set_options", 00:44:24.419 "params": { 00:44:24.419 "timeout_sec": 30 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "bdev_nvme_set_options", 00:44:24.419 "params": { 00:44:24.419 "action_on_timeout": "none", 00:44:24.419 "timeout_us": 0, 00:44:24.419 "timeout_admin_us": 0, 00:44:24.419 "keep_alive_timeout_ms": 10000, 00:44:24.419 "arbitration_burst": 0, 00:44:24.419 "low_priority_weight": 0, 00:44:24.419 "medium_priority_weight": 0, 00:44:24.419 "high_priority_weight": 0, 00:44:24.419 "nvme_adminq_poll_period_us": 10000, 00:44:24.419 "nvme_ioq_poll_period_us": 0, 00:44:24.419 "io_queue_requests": 512, 00:44:24.419 "delay_cmd_submit": true, 00:44:24.419 "transport_retry_count": 4, 00:44:24.419 "bdev_retry_count": 3, 00:44:24.419 "transport_ack_timeout": 0, 00:44:24.419 "ctrlr_loss_timeout_sec": 0, 00:44:24.419 "reconnect_delay_sec": 0, 00:44:24.419 "fast_io_fail_timeout_sec": 0, 00:44:24.419 "disable_auto_failback": false, 00:44:24.419 "generate_uuids": false, 00:44:24.419 "transport_tos": 0, 00:44:24.419 "nvme_error_stat": false, 00:44:24.419 "rdma_srq_size": 0, 
00:44:24.419 "io_path_stat": false, 00:44:24.419 "allow_accel_sequence": false, 00:44:24.419 "rdma_max_cq_size": 0, 00:44:24.419 "rdma_cm_event_timeout_ms": 0, 00:44:24.419 "dhchap_digests": [ 00:44:24.419 "sha256", 00:44:24.419 "sha384", 00:44:24.419 "sha512" 00:44:24.419 ], 00:44:24.419 "dhchap_dhgroups": [ 00:44:24.419 "null", 00:44:24.419 "ffdhe2048", 00:44:24.419 "ffdhe3072", 00:44:24.419 "ffdhe4096", 00:44:24.419 "ffdhe6144", 00:44:24.419 "ffdhe8192" 00:44:24.419 ], 00:44:24.419 "rdma_umr_per_io": false 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "bdev_nvme_attach_controller", 00:44:24.419 "params": { 00:44:24.419 "name": "nvme0", 00:44:24.419 "trtype": "TCP", 00:44:24.419 "adrfam": "IPv4", 00:44:24.419 "traddr": "127.0.0.1", 00:44:24.419 "trsvcid": "4420", 00:44:24.419 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:24.419 "prchk_reftag": false, 00:44:24.419 "prchk_guard": false, 00:44:24.419 "ctrlr_loss_timeout_sec": 0, 00:44:24.419 "reconnect_delay_sec": 0, 00:44:24.419 "fast_io_fail_timeout_sec": 0, 00:44:24.419 "psk": "key0", 00:44:24.419 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:24.419 "hdgst": false, 00:44:24.419 "ddgst": false, 00:44:24.419 "multipath": "multipath" 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "bdev_nvme_set_hotplug", 00:44:24.419 "params": { 00:44:24.419 "period_us": 100000, 00:44:24.419 "enable": false 00:44:24.419 } 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "method": "bdev_wait_for_examine" 00:44:24.419 } 00:44:24.419 ] 00:44:24.419 }, 00:44:24.419 { 00:44:24.419 "subsystem": "nbd", 00:44:24.419 "config": [] 00:44:24.419 } 00:44:24.419 ] 00:44:24.419 }' 00:44:24.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:24.419 06:50:15 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:24.419 06:50:15 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:24.419 [2024-12-13 06:50:15.975832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:24.419 [2024-12-13 06:50:15.975881] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327339 ] 00:44:24.419 [2024-12-13 06:50:16.050718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:24.419 [2024-12-13 06:50:16.069998] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:24.677 [2024-12-13 06:50:16.226297] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:25.242 06:50:16 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:25.242 06:50:16 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:25.242 06:50:16 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:25.243 06:50:16 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:25.243 06:50:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:25.500 06:50:17 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:25.500 06:50:17 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:25.501 06:50:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:25.501 06:50:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:25.501 06:50:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:25.501 06:50:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:25.501 06:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:25.758 06:50:17 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:25.758 06:50:17 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:25.758 06:50:17 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:25.758 06:50:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:25.758 06:50:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:25.758 06:50:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:25.758 06:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:26.016 06:50:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.GrXP0eL1Ah /tmp/tmp.Wn6N3fBkfd 00:44:26.016 06:50:17 keyring_file -- keyring/file.sh@20 -- # killprocess 1327339 00:44:26.016 06:50:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1327339 ']' 00:44:26.016 06:50:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1327339 00:44:26.016 06:50:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:26.016 06:50:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:26.016 06:50:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327339 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1327339' 00:44:26.275 killing process with pid 1327339 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@973 -- # kill 1327339 00:44:26.275 Received shutdown signal, test time was about 1.000000 seconds 00:44:26.275 00:44:26.275 Latency(us) 00:44:26.275 [2024-12-13T05:50:17.929Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:26.275 [2024-12-13T05:50:17.929Z] =================================================================================================================== 00:44:26.275 [2024-12-13T05:50:17.929Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@978 -- # wait 1327339 00:44:26.275 06:50:17 keyring_file -- keyring/file.sh@21 -- # killprocess 1325794 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1325794 ']' 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1325794 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1325794 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1325794' 00:44:26.275 killing process with pid 1325794 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@973 -- # kill 1325794 00:44:26.275 06:50:17 keyring_file -- common/autotest_common.sh@978 -- # wait 1325794 00:44:26.533 00:44:26.533 real 0m11.684s 00:44:26.533 user 0m29.135s 00:44:26.533 sys 0m2.710s 00:44:26.533 06:50:18 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:44:26.533 06:50:18 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:26.533 ************************************ 00:44:26.533 END TEST keyring_file 00:44:26.533 ************************************ 00:44:26.792 06:50:18 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:26.792 06:50:18 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:26.792 06:50:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:26.792 06:50:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:26.792 06:50:18 -- common/autotest_common.sh@10 -- # set +x 00:44:26.792 ************************************ 00:44:26.792 START TEST keyring_linux 00:44:26.792 ************************************ 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:26.792 Joined session keyring: 923532888 00:44:26.792 * Looking for test storage... 
00:44:26.792 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:26.792 06:50:18 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.792 --rc genhtml_branch_coverage=1 00:44:26.792 --rc genhtml_function_coverage=1 00:44:26.792 --rc genhtml_legend=1 00:44:26.792 --rc geninfo_all_blocks=1 00:44:26.792 --rc geninfo_unexecuted_blocks=1 00:44:26.792 00:44:26.792 ' 00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.792 --rc genhtml_branch_coverage=1 00:44:26.792 --rc genhtml_function_coverage=1 00:44:26.792 --rc genhtml_legend=1 00:44:26.792 --rc geninfo_all_blocks=1 00:44:26.792 --rc geninfo_unexecuted_blocks=1 00:44:26.792 00:44:26.792 ' 
00:44:26.792 06:50:18 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:26.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.792 --rc genhtml_branch_coverage=1 00:44:26.792 --rc genhtml_function_coverage=1 00:44:26.792 --rc genhtml_legend=1 00:44:26.792 --rc geninfo_all_blocks=1 00:44:26.792 --rc geninfo_unexecuted_blocks=1 00:44:26.792 00:44:26.792 ' 00:44:26.793 06:50:18 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:26.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:26.793 --rc genhtml_branch_coverage=1 00:44:26.793 --rc genhtml_function_coverage=1 00:44:26.793 --rc genhtml_legend=1 00:44:26.793 --rc geninfo_all_blocks=1 00:44:26.793 --rc geninfo_unexecuted_blocks=1 00:44:26.793 00:44:26.793 ' 00:44:26.793 06:50:18 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:26.793 06:50:18 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:26.793 06:50:18 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:27.052 06:50:18 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:27.052 06:50:18 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:27.052 06:50:18 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:27.052 06:50:18 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:27.052 06:50:18 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.052 06:50:18 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.052 06:50:18 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.052 06:50:18 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:27.052 06:50:18 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:27.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:27.052 /tmp/:spdk-test:key0 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:27.052 06:50:18 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:27.052 06:50:18 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:27.052 /tmp/:spdk-test:key1 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1327875 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 1327875 00:44:27.052 06:50:18 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327875 ']' 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:27.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:27.052 06:50:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:27.052 [2024-12-13 06:50:18.600292] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:27.052 [2024-12-13 06:50:18.600343] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327875 ] 00:44:27.052 [2024-12-13 06:50:18.677333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.052 [2024-12-13 06:50:18.700092] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:27.310 06:50:18 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:27.310 06:50:18 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:27.310 06:50:18 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:27.310 06:50:18 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:27.310 06:50:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:27.310 [2024-12-13 06:50:18.903562] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:27.310 null0 00:44:27.310 [2024-12-13 06:50:18.935608] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:27.310 [2024-12-13 06:50:18.935923] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:27.311 06:50:18 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:27.311 06:50:18 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:27.311 661165331 00:44:27.311 06:50:18 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:27.311 172053139 00:44:27.311 06:50:18 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1327888 00:44:27.311 06:50:18 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1327888 /var/tmp/bperf.sock 00:44:27.311 06:50:18 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:27.311 06:50:18 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1327888 ']' 00:44:27.311 06:50:18 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:27.568 06:50:18 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:27.568 06:50:18 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:27.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:27.569 06:50:18 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:27.569 06:50:18 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:27.569 [2024-12-13 06:50:19.007562] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 23.11.0 initialization... 
00:44:27.569 [2024-12-13 06:50:19.007608] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327888 ] 00:44:27.569 [2024-12-13 06:50:19.081587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:27.569 [2024-12-13 06:50:19.104212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:27.569 06:50:19 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:27.569 06:50:19 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:27.569 06:50:19 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:27.569 06:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:27.826 06:50:19 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:27.826 06:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:28.084 06:50:19 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:28.084 06:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:28.342 [2024-12-13 06:50:19.767351] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:28.343 nvme0n1 00:44:28.343 06:50:19 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:28.343 06:50:19 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:28.343 06:50:19 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:28.343 06:50:19 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:28.343 06:50:19 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:28.343 06:50:19 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.600 06:50:20 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:28.601 06:50:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:28.601 06:50:20 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:28.601 06:50:20 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:28.601 06:50:20 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:28.601 06:50:20 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:28.601 06:50:20 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:28.858 06:50:20 keyring_linux -- keyring/linux.sh@25 -- # sn=661165331 00:44:28.858 06:50:20 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:28.858 06:50:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:28.858 06:50:20 keyring_linux -- keyring/linux.sh@26 -- # [[ 661165331 == \6\6\1\1\6\5\3\3\1 ]] 00:44:28.858 06:50:20 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 661165331 00:44:28.859 06:50:20 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:28.859 06:50:20 keyring_linux 
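The `check_keys` helper above drives two `keyring_get_keys` RPCs through `jq`: one piped to `jq length` for the count, and one through `jq '.[] | select(.name == ...)'` plus `jq -r .sn` to extract the serial of a named key, which is then compared against `keyctl search @s user`. The same filtering can be sketched in Python; the `name`/`sn` field names follow the jq filters in the log, while the sample payload itself is illustrative:

```python
import json

# Hypothetical keyring_get_keys response, shaped after the jq filters above.
rpc_response = json.loads("""
[
  {"name": ":spdk-test:key0", "sn": 661165331},
  {"name": ":spdk-test:key1", "sn": 172053139}
]
""")

def get_key_sn(keys: list, name: str) -> int:
    """Equivalent of: jq '.[] | select(.name == NAME)' | jq -r .sn"""
    matches = [k for k in keys if k["name"] == name]
    assert len(matches) == 1, f"expected exactly one key named {name}"
    return matches[0]["sn"]

assert len(rpc_response) == 2                                  # jq length
assert get_key_sn(rpc_response, ":spdk-test:key0") == 661165331
```

The test passes only when this serial matches the one `keyctl` reports for the session keyring, which is what ties the bperf-side key list back to the kernel keyring.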
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:28.859 Running I/O for 1 seconds... 00:44:29.793 21691.00 IOPS, 84.73 MiB/s 00:44:29.793 Latency(us) 00:44:29.793 [2024-12-13T05:50:21.447Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:29.793 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:29.793 nvme0n1 : 1.01 21690.59 84.73 0.00 0.00 5881.59 5024.43 10298.51 00:44:29.793 [2024-12-13T05:50:21.447Z] =================================================================================================================== 00:44:29.793 [2024-12-13T05:50:21.447Z] Total : 21690.59 84.73 0.00 0.00 5881.59 5024.43 10298.51 00:44:29.793 { 00:44:29.793 "results": [ 00:44:29.793 { 00:44:29.793 "job": "nvme0n1", 00:44:29.793 "core_mask": "0x2", 00:44:29.793 "workload": "randread", 00:44:29.793 "status": "finished", 00:44:29.793 "queue_depth": 128, 00:44:29.793 "io_size": 4096, 00:44:29.793 "runtime": 1.00592, 00:44:29.793 "iops": 21690.591697152857, 00:44:29.793 "mibps": 84.72887381700335, 00:44:29.793 "io_failed": 0, 00:44:29.793 "io_timeout": 0, 00:44:29.793 "avg_latency_us": 5881.588498970971, 00:44:29.793 "min_latency_us": 5024.426666666666, 00:44:29.793 "max_latency_us": 10298.514285714286 00:44:29.793 } 00:44:29.793 ], 00:44:29.793 "core_count": 1 00:44:29.793 } 00:44:29.793 06:50:21 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:29.793 06:50:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:30.051 06:50:21 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:30.051 06:50:21 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:30.051 06:50:21 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:30.051 06:50:21 
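The bdevperf summary above can be cross-checked from its own JSON block: MiB/s is simply IOPS times the 4096-byte I/O size, and total completed I/Os is IOPS times the runtime. A quick sanity check of the reported numbers:

```python
IO_SIZE = 4096          # "io_size" from the results JSON
RUNTIME = 1.00592       # seconds, "runtime" from the results JSON
IOPS = 21690.591697152857

mibps = IOPS * IO_SIZE / 2**20   # bytes/s -> MiB/s
total_ios = IOPS * RUNTIME

# Matches the "mibps" field reported in the JSON block.
assert abs(mibps - 84.72887381700335) < 1e-9
print(f"{mibps:.2f} MiB/s over ~{total_ios:.0f} I/Os")
```

With a 4 KiB I/O size the conversion reduces to IOPS / 256, which is why the two reported figures track each other exactly.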
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:30.051 06:50:21 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:30.051 06:50:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:30.309 06:50:21 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:30.309 06:50:21 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:30.309 06:50:21 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:30.309 06:50:21 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:30.309 06:50:21 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:30.309 06:50:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:30.567 [2024-12-13 06:50:22.004580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:30.567 [2024-12-13 06:50:22.005515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c453d0 (107): Transport endpoint is not connected 00:44:30.567 [2024-12-13 06:50:22.006510] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c453d0 (9): Bad file descriptor 00:44:30.567 [2024-12-13 06:50:22.007511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:30.567 [2024-12-13 06:50:22.007519] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:30.567 [2024-12-13 06:50:22.007526] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:30.567 [2024-12-13 06:50:22.007533] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:30.567 request: 00:44:30.567 { 00:44:30.567 "name": "nvme0", 00:44:30.567 "trtype": "tcp", 00:44:30.567 "traddr": "127.0.0.1", 00:44:30.567 "adrfam": "ipv4", 00:44:30.567 "trsvcid": "4420", 00:44:30.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:30.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:30.567 "prchk_reftag": false, 00:44:30.567 "prchk_guard": false, 00:44:30.567 "hdgst": false, 00:44:30.567 "ddgst": false, 00:44:30.567 "psk": ":spdk-test:key1", 00:44:30.567 "allow_unrecognized_csi": false, 00:44:30.567 "method": "bdev_nvme_attach_controller", 00:44:30.567 "req_id": 1 00:44:30.567 } 00:44:30.567 Got JSON-RPC error response 00:44:30.567 response: 00:44:30.567 { 00:44:30.567 "code": -5, 00:44:30.567 "message": "Input/output error" 00:44:30.567 } 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@33 -- # sn=661165331 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 661165331 00:44:30.567 1 links removed 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:30.567 
06:50:22 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@33 -- # sn=172053139 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 172053139 00:44:30.567 1 links removed 00:44:30.567 06:50:22 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1327888 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327888 ']' 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327888 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327888 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327888' 00:44:30.567 killing process with pid 1327888 00:44:30.567 06:50:22 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327888 00:44:30.567 Received shutdown signal, test time was about 1.000000 seconds 00:44:30.567 00:44:30.567 Latency(us) 00:44:30.567 [2024-12-13T05:50:22.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:30.567 [2024-12-13T05:50:22.221Z] =================================================================================================================== 00:44:30.567 [2024-12-13T05:50:22.221Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:30.568 06:50:22 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327888 
00:44:30.826 06:50:22 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1327875 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1327875 ']' 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1327875 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1327875 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1327875' 00:44:30.826 killing process with pid 1327875 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@973 -- # kill 1327875 00:44:30.826 06:50:22 keyring_linux -- common/autotest_common.sh@978 -- # wait 1327875 00:44:31.085 00:44:31.085 real 0m4.340s 00:44:31.085 user 0m8.216s 00:44:31.085 sys 0m1.465s 00:44:31.085 06:50:22 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:31.085 06:50:22 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:31.085 ************************************ 00:44:31.085 END TEST keyring_linux 00:44:31.085 ************************************ 00:44:31.085 06:50:22 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@346 -- # 
'[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:31.085 06:50:22 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:31.085 06:50:22 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:31.085 06:50:22 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:31.085 06:50:22 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:31.085 06:50:22 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:31.085 06:50:22 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:31.085 06:50:22 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:31.085 06:50:22 -- common/autotest_common.sh@10 -- # set +x 00:44:31.085 06:50:22 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:31.085 06:50:22 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:31.085 06:50:22 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:31.085 06:50:22 -- common/autotest_common.sh@10 -- # set +x 00:44:36.358 INFO: APP EXITING 00:44:36.358 INFO: killing all VMs 00:44:36.358 INFO: killing vhost app 00:44:36.358 INFO: EXIT DONE 00:44:39.648 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:39.648 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:44:39.648 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:44:39.648 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:44:42.183 Cleaning 00:44:42.183 Removing: /var/run/dpdk/spdk0/config 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:44:42.183 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:44:42.183 Removing: /var/run/dpdk/spdk0/hugepage_info 00:44:42.183 Removing: /var/run/dpdk/spdk1/config 00:44:42.183 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:44:42.183 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:44:42.183 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:44:42.443 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:44:42.443 Removing: /var/run/dpdk/spdk1/hugepage_info 00:44:42.443 Removing: /var/run/dpdk/spdk2/config 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:44:42.443 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:44:42.443 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:44:42.443 Removing: /var/run/dpdk/spdk2/hugepage_info 00:44:42.443 Removing: /var/run/dpdk/spdk3/config 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:44:42.443 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:44:42.443 Removing: /var/run/dpdk/spdk3/hugepage_info 00:44:42.443 Removing: /var/run/dpdk/spdk4/config 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:44:42.443 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:44:42.443 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:44:42.443 Removing: /dev/shm/bdev_svc_trace.1 00:44:42.443 Removing: /dev/shm/nvmf_trace.0 00:44:42.443 Removing: /dev/shm/spdk_tgt_trace.pid772286 00:44:42.443 Removing: /var/run/dpdk/spdk0 00:44:42.443 Removing: /var/run/dpdk/spdk1 00:44:42.443 Removing: /var/run/dpdk/spdk2 00:44:42.443 Removing: /var/run/dpdk/spdk3 00:44:42.443 Removing: /var/run/dpdk/spdk4 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1010601 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1014988 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1016743 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1018389 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1018551 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1018772 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1018790 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1019283 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1021065 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1021810 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1022297 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1024342 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1024820 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1025515 00:44:42.443 Removing: /var/run/dpdk/spdk_pid1029491 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1034890 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1034891 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1034892 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1039000 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1042858 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1047573 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1082909 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1086840 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1092918 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1093985 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1095332 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1096686 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1101171 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1105430 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1109383 00:44:42.702 Removing: 
/var/run/dpdk/spdk_pid1116634 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1116638 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1121343 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1121574 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1121738 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1122074 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1122204 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1123983 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1125593 00:44:42.702 Removing: /var/run/dpdk/spdk_pid1127155 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1128724 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1130411 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1132032 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1137784 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1138337 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1140038 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1141051 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1146652 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1149318 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1154481 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1159799 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1168724 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1175571 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1175573 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1193976 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1194444 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1194959 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1195566 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1196265 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1196749 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1197225 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1197880 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1201849 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1202090 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1208547 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1208764 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1213962 00:44:42.703 Removing: /var/run/dpdk/spdk_pid1218117 
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1227608
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1228094
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1232249
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1232481
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1236550
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1242162
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1244666
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1254918
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1263426
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1264981
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1265888
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1281692
00:44:42.703 Removing: /var/run/dpdk/spdk_pid1285429
00:44:42.962 Removing: /var/run/dpdk/spdk_pid1288058
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1295620
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1295707
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1301252
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1303050
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1304916
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1306146
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1308063
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1309104
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1317670
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1318227
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1318771
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1320990
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1321443
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1321900
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1325794
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1325862
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1327339
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1327875
00:44:42.963 Removing: /var/run/dpdk/spdk_pid1327888
00:44:42.963 Removing: /var/run/dpdk/spdk_pid770206
00:44:42.963 Removing: /var/run/dpdk/spdk_pid771228
00:44:42.963 Removing: /var/run/dpdk/spdk_pid772286
00:44:42.963 Removing: /var/run/dpdk/spdk_pid772913
00:44:42.963 Removing: /var/run/dpdk/spdk_pid773835
00:44:42.963 Removing: /var/run/dpdk/spdk_pid773990
00:44:42.963 Removing: /var/run/dpdk/spdk_pid775017
00:44:42.963 Removing: /var/run/dpdk/spdk_pid775024
00:44:42.963 Removing: /var/run/dpdk/spdk_pid775372
00:44:42.963 Removing: /var/run/dpdk/spdk_pid776858
00:44:42.963 Removing: /var/run/dpdk/spdk_pid778205
00:44:42.963 Removing: /var/run/dpdk/spdk_pid778596
00:44:42.963 Removing: /var/run/dpdk/spdk_pid778792
00:44:42.963 Removing: /var/run/dpdk/spdk_pid778983
00:44:42.963 Removing: /var/run/dpdk/spdk_pid779260
00:44:42.963 Removing: /var/run/dpdk/spdk_pid779504
00:44:42.963 Removing: /var/run/dpdk/spdk_pid779749
00:44:42.963 Removing: /var/run/dpdk/spdk_pid780023
00:44:42.963 Removing: /var/run/dpdk/spdk_pid780747
00:44:42.963 Removing: /var/run/dpdk/spdk_pid783676
00:44:42.963 Removing: /var/run/dpdk/spdk_pid783925
00:44:42.963 Removing: /var/run/dpdk/spdk_pid784175
00:44:42.963 Removing: /var/run/dpdk/spdk_pid784197
00:44:42.963 Removing: /var/run/dpdk/spdk_pid784667
00:44:42.963 Removing: /var/run/dpdk/spdk_pid784802
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785147
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785281
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785622
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785639
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785895
00:44:42.963 Removing: /var/run/dpdk/spdk_pid785900
00:44:42.963 Removing: /var/run/dpdk/spdk_pid786564
00:44:42.963 Removing: /var/run/dpdk/spdk_pid786819
00:44:42.963 Removing: /var/run/dpdk/spdk_pid787110
00:44:42.963 Removing: /var/run/dpdk/spdk_pid791143
00:44:42.963 Removing: /var/run/dpdk/spdk_pid795357
00:44:42.963 Removing: /var/run/dpdk/spdk_pid805364
00:44:42.963 Removing: /var/run/dpdk/spdk_pid806030
00:44:42.963 Removing: /var/run/dpdk/spdk_pid810229
00:44:42.963 Removing: /var/run/dpdk/spdk_pid810480
00:44:42.963 Removing: /var/run/dpdk/spdk_pid814667
00:44:42.963 Removing: /var/run/dpdk/spdk_pid820525
00:44:42.963 Removing: /var/run/dpdk/spdk_pid823173
00:44:42.963 Removing: /var/run/dpdk/spdk_pid833185
00:44:42.963 Removing: /var/run/dpdk/spdk_pid842664
00:44:43.222 Removing: /var/run/dpdk/spdk_pid844449
00:44:43.222 Removing: /var/run/dpdk/spdk_pid845349
00:44:43.222 Removing: /var/run/dpdk/spdk_pid861907
00:44:43.222 Removing: /var/run/dpdk/spdk_pid865910
00:44:43.222 Removing: /var/run/dpdk/spdk_pid947543
00:44:43.222 Removing: /var/run/dpdk/spdk_pid952826
00:44:43.222 Removing: /var/run/dpdk/spdk_pid958478
00:44:43.222 Removing: /var/run/dpdk/spdk_pid965342
00:44:43.222 Removing: /var/run/dpdk/spdk_pid965344
00:44:43.222 Removing: /var/run/dpdk/spdk_pid966235
00:44:43.222 Removing: /var/run/dpdk/spdk_pid967090
00:44:43.222 Removing: /var/run/dpdk/spdk_pid967828
00:44:43.222 Removing: /var/run/dpdk/spdk_pid968477
00:44:43.222 Removing: /var/run/dpdk/spdk_pid968479
00:44:43.222 Removing: /var/run/dpdk/spdk_pid968707
00:44:43.222 Removing: /var/run/dpdk/spdk_pid968863
00:44:43.222 Removing: /var/run/dpdk/spdk_pid968932
00:44:43.222 Removing: /var/run/dpdk/spdk_pid969775
00:44:43.222 Removing: /var/run/dpdk/spdk_pid970503
00:44:43.222 Removing: /var/run/dpdk/spdk_pid971388
00:44:43.222 Removing: /var/run/dpdk/spdk_pid972047
00:44:43.222 Removing: /var/run/dpdk/spdk_pid972055
00:44:43.222 Removing: /var/run/dpdk/spdk_pid972282
00:44:43.222 Removing: /var/run/dpdk/spdk_pid973279
00:44:43.222 Removing: /var/run/dpdk/spdk_pid974232
00:44:43.222 Removing: /var/run/dpdk/spdk_pid982347
00:44:43.222 Clean
00:44:43.222 06:50:34 -- common/autotest_common.sh@1453 -- # return 0
00:44:43.222 06:50:34 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:44:43.222 06:50:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:43.222 06:50:34 -- common/autotest_common.sh@10 -- # set +x
00:44:43.222 06:50:34 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:44:43.222 06:50:34 -- common/autotest_common.sh@732 -- # xtrace_disable
00:44:43.222 06:50:34 -- common/autotest_common.sh@10 -- # set +x
00:44:43.481 06:50:34 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:44:43.481 06:50:34 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:44:43.481 06:50:34 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:44:43.481 06:50:34 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:44:43.481 06:50:34 -- spdk/autotest.sh@398 -- # hostname
00:44:43.481 06:50:34 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:44:43.481 geninfo: WARNING: invalid characters removed from testname!
00:45:05.421 06:50:55 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:06.798 06:50:58 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:08.704 06:51:00 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:10.609 06:51:02 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:12.514 06:51:03 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:14.419 06:51:05 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:45:16.323 06:51:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:45:16.323 06:51:07 -- spdk/autorun.sh@1 -- $ timing_finish
00:45:16.323 06:51:07 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:45:16.323 06:51:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:45:16.323 06:51:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:45:16.323 06:51:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:45:16.323 + [[ -n 675289 ]]
00:45:16.323 + sudo kill 675289
00:45:16.333 [Pipeline] }
00:45:16.349 [Pipeline] // stage
00:45:16.354 [Pipeline] }
00:45:16.368 [Pipeline] // timeout
00:45:16.373 [Pipeline] }
00:45:16.387 [Pipeline] // catchError
00:45:16.392 [Pipeline] }
00:45:16.406 [Pipeline] // wrap
00:45:16.412 [Pipeline] }
00:45:16.425 [Pipeline] // catchError
00:45:16.434 [Pipeline] stage
00:45:16.437 [Pipeline] { (Epilogue)
00:45:16.450 [Pipeline] catchError
00:45:16.451 [Pipeline] {
00:45:16.464 [Pipeline] echo
00:45:16.466 Cleanup processes
00:45:16.472 [Pipeline] sh
00:45:16.758 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:16.758 1339639 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:16.771 [Pipeline] sh
00:45:17.055 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:45:17.055 ++ grep -v 'sudo pgrep'
00:45:17.055 ++ awk '{print $1}'
00:45:17.055 + sudo kill -9
00:45:17.055 + true
00:45:17.066 [Pipeline] sh
00:45:17.348 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:45:29.563 [Pipeline] sh
00:45:29.848 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:45:29.848 Artifacts sizes are good
00:45:29.863 [Pipeline] archiveArtifacts
00:45:29.869 Archiving artifacts
00:45:30.028 [Pipeline] sh
00:45:30.313 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:45:30.326 [Pipeline] cleanWs
00:45:30.336 [WS-CLEANUP] Deleting project workspace...
00:45:30.336 [WS-CLEANUP] Deferred wipeout is used...
00:45:30.343 [WS-CLEANUP] done
00:45:30.345 [Pipeline] }
00:45:30.362 [Pipeline] // catchError
00:45:30.373 [Pipeline] sh
00:45:30.716 + logger -p user.info -t JENKINS-CI
00:45:30.749 [Pipeline] }
00:45:30.763 [Pipeline] // stage
00:45:30.768 [Pipeline] }
00:45:30.782 [Pipeline] // node
00:45:30.787 [Pipeline] End of Pipeline
00:45:30.828 Finished: SUCCESS